Test Report: Docker_Linux_crio_arm64 21997

4e6ec0ce1ba9ad510ab2048b3373e13c9f965153:2025-12-05:42642

Failed tests (57/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.37
44 TestAddons/parallel/Registry 16.96
45 TestAddons/parallel/RegistryCreds 0.48
46 TestAddons/parallel/Ingress 143.83
47 TestAddons/parallel/InspektorGadget 5.26
48 TestAddons/parallel/MetricsServer 5.37
50 TestAddons/parallel/CSI 39.23
51 TestAddons/parallel/Headlamp 3.3
52 TestAddons/parallel/CloudSpanner 6.29
53 TestAddons/parallel/LocalPath 9.7
54 TestAddons/parallel/NvidiaDevicePlugin 6.31
55 TestAddons/parallel/Yakd 6.27
106 TestFunctional/parallel/ServiceCmdConnect 603.6
134 TestFunctional/parallel/ServiceCmd/DeployApp 601
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
144 TestFunctional/parallel/ServiceCmd/Format 0.49
145 TestFunctional/parallel/ServiceCmd/URL 0.55
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.91
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 509.89
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.65
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.43
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.39
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.41
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.3
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.25
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.76
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.14
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.43
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.81
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.44
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.1
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 108.42
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.3
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.4
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 0.88
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.88
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.3
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.2
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.37
293 TestJSONOutput/pause/Command 1.7
299 TestJSONOutput/unpause/Command 1.6
358 TestKubernetesUpgrade 804.43
384 TestPause/serial/Pause 6.21
443 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7200.069
TestAddons/serial/Volcano (0.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable volcano --alsologtostderr -v=1: exit status 11 (365.577311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:13:44.534551  451031 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:44.535637  451031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:44.535655  451031 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:44.535662  451031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:44.536115  451031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:13:44.536595  451031 mustload.go:66] Loading cluster: addons-640282
	I1205 06:13:44.537395  451031 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:44.537454  451031 addons.go:622] checking whether the cluster is paused
	I1205 06:13:44.537623  451031 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:44.537643  451031 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:13:44.538407  451031 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:13:44.576764  451031 ssh_runner.go:195] Run: systemctl --version
	I1205 06:13:44.576819  451031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:13:44.595117  451031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:13:44.722892  451031 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:13:44.722985  451031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:13:44.761304  451031 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:13:44.761326  451031 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:13:44.761332  451031 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:13:44.761336  451031 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:13:44.761340  451031 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:13:44.761343  451031 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:13:44.761347  451031 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:13:44.761350  451031 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:13:44.761372  451031 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:13:44.761391  451031 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:13:44.761396  451031 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:13:44.761399  451031 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:13:44.761403  451031 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:13:44.761406  451031 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:13:44.761409  451031 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:13:44.761464  451031 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:13:44.761477  451031 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:13:44.761483  451031 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:13:44.761486  451031 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:13:44.761502  451031 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:13:44.761509  451031 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:13:44.761512  451031 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:13:44.761516  451031 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:13:44.761519  451031 cri.go:89] found id: ""
	I1205 06:13:44.761584  451031 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:13:44.788811  451031 out.go:203] 
	W1205 06:13:44.791721  451031 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:13:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:13:44.791745  451031 out.go:285] * 
	W1205 06:13:44.798084  451031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:13:44.801145  451031 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.37s)
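
All of the "addons disable" failures in this report share the same stderr: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" inside the node, and that check exits 1 because /run/runc does not exist on this CRI-O node. A minimal sketch for confirming the state by hand, assuming the profile name from the log above; the commands are the same ones the log shows minikube running:

	# Open a shell inside the minikube node
	minikube -p addons-640282 ssh
	# The exact check minikube runs; fails while /run/runc is absent
	sudo runc list -f json
	# Listing kube-system containers via crictl sidesteps the runc state
	# directory entirely (the same query cri.go issues earlier in this log)
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system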

TestAddons/parallel/Registry (16.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.264796ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003763041s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003755138s
addons_test.go:392: (dbg) Run:  kubectl --context addons-640282 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-640282 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-640282 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.421930913s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 ip
2025/12/05 06:14:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable registry --alsologtostderr -v=1: exit status 11 (268.517253ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:12.951103  452097 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:12.951894  452097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:12.951901  452097 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:12.951907  452097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:12.952252  452097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:12.952610  452097 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:12.952987  452097 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:12.952999  452097 addons.go:622] checking whether the cluster is paused
	I1205 06:14:12.953105  452097 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:12.953115  452097 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:12.954049  452097 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:12.971755  452097 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:12.971818  452097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:12.989024  452097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:13.102662  452097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:13.102760  452097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:13.135586  452097 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:13.135624  452097 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:13.135630  452097 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:13.135634  452097 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:13.135637  452097 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:13.135641  452097 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:13.135644  452097 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:13.135647  452097 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:13.135650  452097 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:13.135656  452097 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:13.135660  452097 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:13.135663  452097 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:13.135667  452097 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:13.135671  452097 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:13.135677  452097 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:13.135682  452097 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:13.135686  452097 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:13.135690  452097 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:13.135693  452097 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:13.135696  452097 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:13.135701  452097 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:13.135704  452097 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:13.135707  452097 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:13.135710  452097 cri.go:89] found id: ""
	I1205 06:14:13.135762  452097 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:13.150945  452097 out.go:203] 
	W1205 06:14:13.153820  452097 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:13.153845  452097 out.go:285] * 
	W1205 06:14:13.160229  452097 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:13.163212  452097 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.96s)

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.127876ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-640282
addons_test.go:332: (dbg) Run:  kubectl --context addons-640282 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (259.336682ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:38.999398  453019 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:39.000250  453019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:39.000289  453019 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:39.000311  453019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:39.000592  453019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:39.000923  453019 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:39.001384  453019 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:39.001430  453019 addons.go:622] checking whether the cluster is paused
	I1205 06:14:39.001580  453019 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:39.001616  453019 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:39.002239  453019 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:39.024413  453019 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:39.024477  453019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:39.042677  453019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:39.145215  453019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:39.145320  453019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:39.178544  453019 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:39.178568  453019 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:39.178574  453019 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:39.178583  453019 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:39.178587  453019 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:39.178591  453019 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:39.178594  453019 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:39.178598  453019 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:39.178601  453019 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:39.178608  453019 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:39.178611  453019 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:39.178614  453019 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:39.178618  453019 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:39.178621  453019 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:39.178624  453019 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:39.178634  453019 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:39.178645  453019 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:39.178649  453019 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:39.178652  453019 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:39.178655  453019 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:39.178659  453019 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:39.178662  453019 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:39.178665  453019 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:39.178668  453019 cri.go:89] found id: ""
	I1205 06:14:39.178719  453019 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:39.193267  453019 out.go:203] 
	W1205 06:14:39.196183  453019 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:39.196210  453019 out.go:285] * 
	W1205 06:14:39.202524  453019 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:39.205395  453019 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (143.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-640282 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-640282 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-640282 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1ea38fd2-187c-45ec-baac-321be9312820] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1ea38fd2-187c-45ec-baac-321be9312820] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003411788s
I1205 06:14:33.476192  444147 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.104554489s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
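
Exit status 28 is curl's exit code for an operation timeout: the request went out from inside the node, but nothing answered on port 80 within the ~2m10s the command ran. A hedged reproduction sketch with an explicit deadline, reusing the URL and Host header from the failing step above:

	# Fail fast after 10 seconds instead of waiting out the full TCP timeout
	minikube -p addons-640282 ssh "curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
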
addons_test.go:288: (dbg) Run:  kubectl --context addons-640282 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-640282
helpers_test.go:243: (dbg) docker inspect addons-640282:

-- stdout --
	[
	    {
	        "Id": "b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005",
	        "Created": "2025-12-05T06:11:22.483403755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:11:22.55725687Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/hostname",
	        "HostsPath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/hosts",
	        "LogPath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005-json.log",
	        "Name": "/addons-640282",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-640282:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-640282",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005",
	                "LowerDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-640282",
	                "Source": "/var/lib/docker/volumes/addons-640282/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-640282",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-640282",
	                "name.minikube.sigs.k8s.io": "addons-640282",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b02610bda796e6596fde1e86088c0cf7e74f122abeb77fe4fbf4f775488f26d5",
	            "SandboxKey": "/var/run/docker/netns/b02610bda796",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-640282": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:e5:55:99:0c:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8171e975fc82aee656b3853dc5b0661bbcc39adb7da0925bfde854ed0e4cc72",
	                    "EndpointID": "8e2f4cebdc4acd5b3219427e24bebcd06ae5f1e40f797a05a7cd7a7d70451631",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-640282",
	                        "b467876b75d6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-640282 -n addons-640282
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-640282 logs -n 25: (1.540318372s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	│ delete  │ -p download-docker-885973                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-885973 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start   │ --download-only -p binary-mirror-589348 --alsologtostderr --binary-mirror http://127.0.0.1:36515 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-589348   │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	│ delete  │ -p binary-mirror-589348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-589348   │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ addons  │ enable dashboard -p addons-640282                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-640282                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	│ start   │ -p addons-640282 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:13 UTC │
	│ addons  │ addons-640282 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ addons  │ addons-640282 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ addons  │ enable headlamp -p addons-640282 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ addons  │ addons-640282 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │                     │
	│ ip      │ addons-640282 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ addons  │ addons-640282 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ addons-640282 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ addons-640282 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ ssh     │ addons-640282 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ addons-640282 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ addons-640282 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-640282                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
	│ addons  │ addons-640282 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ addons-640282 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ addons  │ addons-640282 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │                     │
	│ ssh     │ addons-640282 ssh cat /opt/local-path-provisioner/pvc-dc2d0fe3-6c45-402d-afbe-77edefd5e6d2_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
	│ addons  │ addons-640282 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ addons  │ addons-640282 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │                     │
	│ ip      │ addons-640282 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-640282          │ jenkins │ v1.37.0 │ 05 Dec 25 06:16 UTC │ 05 Dec 25 06:16 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:11:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
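
	A note on reading what follows: each entry uses the klog format named above, so the leading character encodes severity (I/W/E/F). A minimal filter sketch, assuming the raw log has been saved to a file (the name last_start.log is illustrative, with the report's leading indentation stripped):

	# keep only warning/error/fatal entries from the start log
	grep -E '^[WEF][0-9]{4} ' last_start.log
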
	I1205 06:11:16.369775  445166 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:11:16.369970  445166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:16.370002  445166 out.go:374] Setting ErrFile to fd 2...
	I1205 06:11:16.370022  445166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:16.370305  445166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:11:16.370866  445166 out.go:368] Setting JSON to false
	I1205 06:11:16.371701  445166 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10404,"bootTime":1764904673,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:11:16.371802  445166 start.go:143] virtualization:  
	I1205 06:11:16.375212  445166 out.go:179] * [addons-640282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:11:16.378304  445166 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:11:16.378404  445166 notify.go:221] Checking for updates...
	I1205 06:11:16.383905  445166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:11:16.386751  445166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:16.389521  445166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:11:16.392589  445166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:11:16.395434  445166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:11:16.398522  445166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:11:16.435359  445166 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:11:16.435502  445166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:16.496570  445166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:11:16.487540451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
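
	The docker info dump above comes from rendering `docker info` through a Go template, and the same template syntax can pull out single fields. A sketch with documented docker CLI flags (field names taken from the dump itself):

	# full JSON, exactly as minikube invokes it
	docker system info --format "{{json .}}"
	# or select just the fields that matter for driver validation
	docker system info --format "{{.Driver}} {{.CgroupDriver}} {{.NCPU}} {{.MemTotal}}"
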
	I1205 06:11:16.496685  445166 docker.go:319] overlay module found
	I1205 06:11:16.499680  445166 out.go:179] * Using the docker driver based on user configuration
	I1205 06:11:16.502534  445166 start.go:309] selected driver: docker
	I1205 06:11:16.502557  445166 start.go:927] validating driver "docker" against <nil>
	I1205 06:11:16.502572  445166 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:11:16.503326  445166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:16.555478  445166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:11:16.546844332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:16.555639  445166 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:11:16.555862  445166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:11:16.558667  445166 out.go:179] * Using Docker driver with root privileges
	I1205 06:11:16.561371  445166 cni.go:84] Creating CNI manager for ""
	I1205 06:11:16.561433  445166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:16.561445  445166 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:11:16.561515  445166 start.go:353] cluster config:
	{Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:11:16.564587  445166 out.go:179] * Starting "addons-640282" primary control-plane node in "addons-640282" cluster
	I1205 06:11:16.567379  445166 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:11:16.570183  445166 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:11:16.572963  445166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:11:16.573034  445166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1205 06:11:16.573049  445166 cache.go:65] Caching tarball of preloaded images
	I1205 06:11:16.573053  445166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:11:16.573134  445166 preload.go:238] Found /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 06:11:16.573145  445166 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:11:16.573485  445166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/config.json ...
	I1205 06:11:16.573506  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/config.json: {Name:mkfabe31521d55786406320c487f12d681aef468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
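
	The config.json written here persists the cluster config dumped above; once saved, the profile is visible to ordinary minikube commands. Usage sketch:

	# list saved profiles and their key settings
	minikube profile list
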
	I1205 06:11:16.591806  445166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:11:16.591832  445166 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 06:11:16.591852  445166 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:11:16.591882  445166 start.go:360] acquireMachinesLock for addons-640282: {Name:mk3b6d44b6b925e5bf07bbdf6658ad19c10866d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:11:16.591991  445166 start.go:364] duration metric: took 88.69µs to acquireMachinesLock for "addons-640282"
	I1205 06:11:16.592021  445166 start.go:93] Provisioning new machine with config: &{Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:11:16.592090  445166 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:11:16.595509  445166 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1205 06:11:16.595758  445166 start.go:159] libmachine.API.Create for "addons-640282" (driver="docker")
	I1205 06:11:16.595799  445166 client.go:173] LocalClient.Create starting
	I1205 06:11:16.595908  445166 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem
	I1205 06:11:17.176454  445166 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem
	I1205 06:11:17.399092  445166 cli_runner.go:164] Run: docker network inspect addons-640282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:11:17.416138  445166 cli_runner.go:211] docker network inspect addons-640282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:11:17.416228  445166 network_create.go:284] running [docker network inspect addons-640282] to gather additional debugging logs...
	I1205 06:11:17.416252  445166 cli_runner.go:164] Run: docker network inspect addons-640282
	W1205 06:11:17.434564  445166 cli_runner.go:211] docker network inspect addons-640282 returned with exit code 1
	I1205 06:11:17.434596  445166 network_create.go:287] error running [docker network inspect addons-640282]: docker network inspect addons-640282: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-640282 not found
	I1205 06:11:17.434609  445166 network_create.go:289] output of [docker network inspect addons-640282]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-640282 not found
	
	** /stderr **
	I1205 06:11:17.434701  445166 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:11:17.451053  445166 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ae9030}
	I1205 06:11:17.451096  445166 network_create.go:124] attempt to create docker network addons-640282 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:11:17.451156  445166 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-640282 addons-640282
	I1205 06:11:17.518259  445166 network_create.go:108] docker network addons-640282 192.168.49.0/24 created
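
	The network is created with a fixed subnet and gateway so the node container can be assigned the static IP computed on the next line. A verification sketch using the same docker CLI (network name taken from the log):

	docker network inspect addons-640282 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 via 192.168.49.1
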
	I1205 06:11:17.518308  445166 kic.go:121] calculated static IP "192.168.49.2" for the "addons-640282" container
	I1205 06:11:17.518468  445166 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:11:17.537482  445166 cli_runner.go:164] Run: docker volume create addons-640282 --label name.minikube.sigs.k8s.io=addons-640282 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:11:17.555857  445166 oci.go:103] Successfully created a docker volume addons-640282
	I1205 06:11:17.555968  445166 cli_runner.go:164] Run: docker run --rm --name addons-640282-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-640282 --entrypoint /usr/bin/test -v addons-640282:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:11:18.449149  445166 oci.go:107] Successfully prepared a docker volume addons-640282
	I1205 06:11:18.449221  445166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:11:18.449231  445166 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 06:11:18.449298  445166 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-640282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 06:11:22.417567  445166 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-640282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.968231094s)
	I1205 06:11:22.417604  445166 kic.go:203] duration metric: took 3.968369663s to extract preloaded images to volume ...
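
	The extraction above follows a general docker pattern: bind-mount the archive read-only, mount the named volume at the target path, and run tar as the container entrypoint so nothing else starts. Sketched with illustrative paths; $KICBASE_IMAGE stands in for the kicbase digest used above:

	docker run --rm \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v addons-640282:/extractDir \
	  --entrypoint /usr/bin/tar \
	  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
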
	W1205 06:11:22.417756  445166 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 06:11:22.417878  445166 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:11:22.467199  445166 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-640282 --name addons-640282 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-640282 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-640282 --network addons-640282 --ip 192.168.49.2 --volume addons-640282:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 06:11:22.788731  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Running}}
	I1205 06:11:22.813080  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:22.841190  445166 cli_runner.go:164] Run: docker exec addons-640282 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:11:22.892656  445166 oci.go:144] the created container "addons-640282" has a running status.
	I1205 06:11:22.892685  445166 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa...
	I1205 06:11:23.327802  445166 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:11:23.350264  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:23.366943  445166 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:11:23.366965  445166 kic_runner.go:114] Args: [docker exec --privileged addons-640282 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:11:23.407362  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:23.426449  445166 machine.go:94] provisionDockerMachine start ...
	I1205 06:11:23.426552  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:23.443279  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:23.443617  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:23.443632  445166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:11:23.444269  445166 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45202->127.0.0.1:33133: read: connection reset by peer
	I1205 06:11:26.593993  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640282
	
	I1205 06:11:26.594020  445166 ubuntu.go:182] provisioning hostname "addons-640282"
	I1205 06:11:26.594107  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:26.611695  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:26.612027  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:26.612043  445166 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-640282 && echo "addons-640282" | sudo tee /etc/hostname
	I1205 06:11:26.771676  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640282
	
	I1205 06:11:26.771789  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:26.788484  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:26.788805  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:26.788831  445166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-640282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-640282/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-640282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:11:26.938707  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:11:26.938734  445166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:11:26.938760  445166 ubuntu.go:190] setting up certificates
	I1205 06:11:26.938770  445166 provision.go:84] configureAuth start
	I1205 06:11:26.938836  445166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-640282
	I1205 06:11:26.955736  445166 provision.go:143] copyHostCerts
	I1205 06:11:26.955825  445166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:11:26.955968  445166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:11:26.956044  445166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:11:26.956107  445166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.addons-640282 san=[127.0.0.1 192.168.49.2 addons-640282 localhost minikube]
	I1205 06:11:27.335721  445166 provision.go:177] copyRemoteCerts
	I1205 06:11:27.335802  445166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:11:27.335853  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:27.353058  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:27.458434  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:11:27.475924  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 06:11:27.493804  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 06:11:27.512020  445166 provision.go:87] duration metric: took 573.233315ms to configureAuth
	I1205 06:11:27.512051  445166 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:11:27.512242  445166 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:11:27.512356  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:27.530211  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:27.530556  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:27.530577  445166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:11:28.037414  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:11:28.037439  445166 machine.go:97] duration metric: took 4.610959338s to provisionDockerMachine
	I1205 06:11:28.037450  445166 client.go:176] duration metric: took 11.441641191s to LocalClient.Create
	I1205 06:11:28.037463  445166 start.go:167] duration metric: took 11.44170699s to libmachine.API.Create "addons-640282"
	I1205 06:11:28.037470  445166 start.go:293] postStartSetup for "addons-640282" (driver="docker")
	I1205 06:11:28.037481  445166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:11:28.037553  445166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:11:28.037604  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.055522  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.162242  445166 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:11:28.165325  445166 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:11:28.165358  445166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:11:28.165370  445166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:11:28.165449  445166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:11:28.165477  445166 start.go:296] duration metric: took 128.001024ms for postStartSetup
	I1205 06:11:28.165787  445166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-640282
	I1205 06:11:28.182748  445166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/config.json ...
	I1205 06:11:28.183037  445166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:11:28.183096  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.198893  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.299231  445166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:11:28.303779  445166 start.go:128] duration metric: took 11.711673025s to createHost
	I1205 06:11:28.303803  445166 start.go:83] releasing machines lock for "addons-640282", held for 11.711798393s
	I1205 06:11:28.303875  445166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-640282
	I1205 06:11:28.320568  445166 ssh_runner.go:195] Run: cat /version.json
	I1205 06:11:28.320622  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.320630  445166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:11:28.320722  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.339107  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.339005  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.437934  445166 ssh_runner.go:195] Run: systemctl --version
	I1205 06:11:28.527956  445166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:11:28.569451  445166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:11:28.575021  445166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:11:28.575098  445166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:11:28.603359  445166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 06:11:28.603385  445166 start.go:496] detecting cgroup driver to use...
	I1205 06:11:28.603418  445166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:11:28.603468  445166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:11:28.621615  445166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:11:28.634577  445166 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:11:28.634649  445166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:11:28.652661  445166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:11:28.671405  445166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:11:28.789192  445166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:11:28.908748  445166 docker.go:234] disabling docker service ...
	I1205 06:11:28.908866  445166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:11:28.930094  445166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:11:28.943461  445166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:11:29.057995  445166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:11:29.181548  445166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:11:29.193544  445166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:11:29.207788  445166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:11:29.207855  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.216651  445166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:11:29.216729  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.226004  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.235272  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.244180  445166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:11:29.252443  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.261353  445166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.274828  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
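
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A sketch of checking the result (the commented lines are a reconstruction from the commands above, not a captured file):

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# expected to contain, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
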
	I1205 06:11:29.283570  445166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:11:29.290819  445166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:11:29.298171  445166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:11:29.411464  445166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:11:29.581960  445166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:11:29.582102  445166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:11:29.586505  445166 start.go:564] Will wait 60s for crictl version
	I1205 06:11:29.586618  445166 ssh_runner.go:195] Run: which crictl
	I1205 06:11:29.589997  445166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:11:29.627308  445166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
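
	With /etc/crictl.yaml pointing runtime-endpoint at the CRI-O socket (written a few lines above), crictl needs no --runtime-endpoint flag. Usage sketch, mirroring calls that appear elsewhere in this log:

	sudo crictl version
	sudo crictl images --output json
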
	I1205 06:11:29.627480  445166 ssh_runner.go:195] Run: crio --version
	I1205 06:11:29.659181  445166 ssh_runner.go:195] Run: crio --version
	I1205 06:11:29.692016  445166 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 06:11:29.695080  445166 cli_runner.go:164] Run: docker network inspect addons-640282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:11:29.711113  445166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:11:29.714784  445166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:11:29.724841  445166 kubeadm.go:884] updating cluster {Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:11:29.724953  445166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:11:29.725013  445166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:11:29.763299  445166 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:11:29.763325  445166 crio.go:433] Images already preloaded, skipping extraction
	I1205 06:11:29.763422  445166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:11:29.788193  445166 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:11:29.788214  445166 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:11:29.788223  445166 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1205 06:11:29.788310  445166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-640282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:11:29.788395  445166 ssh_runner.go:195] Run: crio config
	I1205 06:11:29.859554  445166 cni.go:84] Creating CNI manager for ""
	I1205 06:11:29.859582  445166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:29.859603  445166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:11:29.859664  445166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-640282 NodeName:addons-640282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:11:29.859805  445166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-640282"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:11:29.859890  445166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:11:29.867780  445166 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:11:29.867854  445166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:11:29.876061  445166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 06:11:29.889911  445166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:11:29.903308  445166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
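
	The rendered kubeadm config is staged here as kubeadm.yaml.new before bootstrap. If a generated config like this needs a by-hand sanity check, kubeadm itself supports a dry run (path illustrative):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
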
	I1205 06:11:29.916745  445166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:11:29.920487  445166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:11:29.930362  445166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:11:30.084706  445166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:11:30.103961  445166 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282 for IP: 192.168.49.2
	I1205 06:11:30.104035  445166 certs.go:195] generating shared ca certs ...
	I1205 06:11:30.104067  445166 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.104275  445166 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:11:30.258433  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt ...
	I1205 06:11:30.258469  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt: {Name:mkc2f548ae0c6064e6a11ca99f32f9d80c761c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.258672  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key ...
	I1205 06:11:30.258684  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key: {Name:mk8b83813271d8b8513855033f159e0bd161be36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.258771  445166 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:11:30.363081  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt ...
	I1205 06:11:30.363107  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt: {Name:mk659828d6931ed8fef790edec4dd3c58c1614a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.363264  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key ...
	I1205 06:11:30.363277  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key: {Name:mk9ffcb7450ee6a545b8f33867c7db50ce955e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.363357  445166 certs.go:257] generating profile certs ...
	I1205 06:11:30.363413  445166 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.key
	I1205 06:11:30.363428  445166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt with IP's: []
	I1205 06:11:30.799471  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt ...
	I1205 06:11:30.799501  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: {Name:mk903c818a43bab9c5ec892bf6c703541a485eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.799681  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.key ...
	I1205 06:11:30.799695  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.key: {Name:mk79279a33252ca16bf2f611f598296c5131cab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.799768  445166 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b
	I1205 06:11:30.799791  445166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:11:30.882320  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b ...
	I1205 06:11:30.882350  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b: {Name:mkab35c912beeffc9cb6b43be7cd3582198691c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.882554  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b ...
	I1205 06:11:30.882569  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b: {Name:mk1b086178b0ae040c441f1e9e1a10caad913a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.882658  445166 certs.go:382] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt
	I1205 06:11:30.882743  445166 certs.go:386] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key
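	The SAN list above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2) is what lets in-cluster clients reach the apiserver through the kubernetes Service: 10.96.0.1 is the first address of the ServiceCIDR (10.96.0.0/12 in the StartCluster config below), alongside localhost and the node IP. The SANs baked into the copied cert can be read back with openssl (a sketch; -ext needs OpenSSL 1.1.1+):
	
	  openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt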
	I1205 06:11:30.882801  445166 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key
	I1205 06:11:30.882828  445166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt with IP's: []
	I1205 06:11:30.944941  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt ...
	I1205 06:11:30.944972  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt: {Name:mk6e6f618554910879efe0dd57b84948ea2e4685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.945161  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key ...
	I1205 06:11:30.945181  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key: {Name:mk9d46bb0f45e8838f8bb4c5d7f69899789a9817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.945369  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:11:30.945417  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:11:30.945448  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:11:30.945483  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:11:30.946054  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:11:30.965590  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:11:30.985407  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:11:31.004188  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:11:31.024392  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 06:11:31.041926  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:11:31.060710  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:11:31.079262  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:11:31.098776  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:11:31.118174  445166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:11:31.131663  445166 ssh_runner.go:195] Run: openssl version
	I1205 06:11:31.138635  445166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.146510  445166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:11:31.154529  445166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.158818  445166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.158931  445166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.200028  445166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:11:31.207501  445166 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
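	The b5213941.0 link name follows OpenSSL's hashed-directory convention: the subject-name hash of the CA certificate (exactly what the x509 -hash call above prints) plus a ".0" suffix, which is how TLS clients locate the CA under /etc/ssl/certs. The two steps together, as run on the node:
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"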
	I1205 06:11:31.214800  445166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:11:31.218361  445166 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
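	Expressed as a shell check, the probe above is minikube's first-start heuristic: no apiserver-kubelet-client cert on disk means there is no prior control plane to reuse, so it proceeds to kubeadm init (a sketch of the equivalent test):
	
	  if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
	    echo "no kubelet client cert: first start, run kubeadm init"
	  fi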
	I1205 06:11:31.218450  445166 kubeadm.go:401] StartCluster: {Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:11:31.218582  445166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:11:31.218656  445166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:11:31.245116  445166 cri.go:89] found id: ""
	I1205 06:11:31.245188  445166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:11:31.252673  445166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:11:31.260250  445166 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:11:31.260334  445166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:11:31.267783  445166 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:11:31.267849  445166 kubeadm.go:158] found existing configuration files:
	
	I1205 06:11:31.267913  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:11:31.275983  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:11:31.276079  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:11:31.283224  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:11:31.290980  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:11:31.291045  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:11:31.298146  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:11:31.306021  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:11:31.306136  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:11:31.313393  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:11:31.320898  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:11:31.320985  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
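	Each grep/rm pair above applies the same stale-config rule: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. As a loop (a sketch, same endpoint as above):
	
	  EP=https://control-plane.minikube.internal:8443
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	  done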
	I1205 06:11:31.328464  445166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:11:31.364887  445166 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 06:11:31.365195  445166 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:11:31.390108  445166 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:11:31.390184  445166 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:11:31.390223  445166 kubeadm.go:319] OS: Linux
	I1205 06:11:31.390272  445166 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:11:31.390335  445166 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:11:31.390423  445166 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:11:31.390477  445166 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:11:31.390527  445166 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:11:31.390578  445166 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:11:31.390632  445166 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:11:31.390684  445166 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:11:31.390734  445166 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:11:31.475451  445166 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:11:31.475568  445166 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:11:31.475665  445166 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:11:31.495474  445166 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:11:31.498477  445166 out.go:252]   - Generating certificates and keys ...
	I1205 06:11:31.498572  445166 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:11:31.498642  445166 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:11:31.975093  445166 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:11:32.107640  445166 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:11:32.221442  445166 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:11:32.808177  445166 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:11:33.471655  445166 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:11:33.471800  445166 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-640282 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:11:33.644154  445166 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:11:33.644311  445166 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-640282 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:11:34.315678  445166 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:11:34.897200  445166 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:11:35.464278  445166 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:11:35.464593  445166 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:11:36.212861  445166 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:11:36.456854  445166 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:11:36.925584  445166 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:11:37.862726  445166 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:11:38.683842  445166 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:11:38.684689  445166 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:11:38.687424  445166 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:11:38.690865  445166 out.go:252]   - Booting up control plane ...
	I1205 06:11:38.690976  445166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:11:38.691053  445166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:11:38.691428  445166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:11:38.708841  445166 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:11:38.709128  445166 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:11:38.716847  445166 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:11:38.717255  445166 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:11:38.717492  445166 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:11:38.850898  445166 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:11:38.851019  445166 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:11:39.853911  445166 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00178822s
	I1205 06:11:39.854571  445166 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 06:11:39.854672  445166 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1205 06:11:39.854765  445166 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 06:11:39.854847  445166 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 06:11:44.514306  445166 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.658727217s
	I1205 06:11:45.793971  445166 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.939381352s
	I1205 06:11:46.855985  445166 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001369358s
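	The three control-plane checks above poll ordinary HTTPS health endpoints, so the same probes can be run by hand from inside the node (self-signed serving certs, hence -k):
	
	  curl -ks https://192.168.49.2:8443/livez        # kube-apiserver
	  curl -ks https://127.0.0.1:10257/healthz        # kube-controller-manager
	  curl -ks https://127.0.0.1:10259/livez          # kube-scheduler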
	I1205 06:11:46.888437  445166 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 06:11:46.903478  445166 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 06:11:46.920307  445166 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 06:11:46.920759  445166 kubeadm.go:319] [mark-control-plane] Marking the node addons-640282 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 06:11:46.934981  445166 kubeadm.go:319] [bootstrap-token] Using token: 3i7si6.iz5o6rwdyxx3bnn8
	I1205 06:11:46.937907  445166 out.go:252]   - Configuring RBAC rules ...
	I1205 06:11:46.938036  445166 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 06:11:46.944044  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 06:11:46.951833  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 06:11:46.958061  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 06:11:46.961892  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 06:11:46.965763  445166 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 06:11:47.265073  445166 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 06:11:47.687824  445166 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 06:11:48.262566  445166 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 06:11:48.263806  445166 kubeadm.go:319] 
	I1205 06:11:48.263891  445166 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 06:11:48.263905  445166 kubeadm.go:319] 
	I1205 06:11:48.263984  445166 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 06:11:48.263993  445166 kubeadm.go:319] 
	I1205 06:11:48.264018  445166 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 06:11:48.264079  445166 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 06:11:48.264133  445166 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 06:11:48.264143  445166 kubeadm.go:319] 
	I1205 06:11:48.264198  445166 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 06:11:48.264206  445166 kubeadm.go:319] 
	I1205 06:11:48.264254  445166 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 06:11:48.264263  445166 kubeadm.go:319] 
	I1205 06:11:48.264314  445166 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 06:11:48.264393  445166 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 06:11:48.264465  445166 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 06:11:48.264472  445166 kubeadm.go:319] 
	I1205 06:11:48.264556  445166 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 06:11:48.264637  445166 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 06:11:48.264645  445166 kubeadm.go:319] 
	I1205 06:11:48.264729  445166 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3i7si6.iz5o6rwdyxx3bnn8 \
	I1205 06:11:48.264836  445166 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d5281293bcbe7e3015ce386b372a929210d99fe4b4fbe4b31e7ad560f07d8f20 \
	I1205 06:11:48.264859  445166 kubeadm.go:319] 	--control-plane 
	I1205 06:11:48.264867  445166 kubeadm.go:319] 
	I1205 06:11:48.264952  445166 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 06:11:48.264961  445166 kubeadm.go:319] 
	I1205 06:11:48.265044  445166 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3i7si6.iz5o6rwdyxx3bnn8 \
	I1205 06:11:48.265150  445166 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d5281293bcbe7e3015ce386b372a929210d99fe4b4fbe4b31e7ad560f07d8f20 
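	The --discovery-token-ca-cert-hash in both join commands above is a SHA-256 over the cluster CA's public key. If the printed value is lost, it can be recomputed on the control plane with the standard kubeadm recipe, pointed at the certificateDir used earlier in this log (assumes an RSA CA key, kubeadm's default):
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'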
	I1205 06:11:48.267978  445166 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 06:11:48.268205  445166 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:11:48.268312  445166 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:11:48.268333  445166 cni.go:84] Creating CNI manager for ""
	I1205 06:11:48.268341  445166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:48.271492  445166 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 06:11:48.274459  445166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 06:11:48.278706  445166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 06:11:48.278727  445166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 06:11:48.292031  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
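	The apply above installs the kindnet manifest recommended at the CNI step; whether it landed can be checked with the same pinned kubectl (the DaemonSet name "kindnet" is assumed from the manifest, which this log does not print):
	
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get daemonset kindnet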
	I1205 06:11:48.578604  445166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:11:48.578678  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:48.578795  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-640282 minikube.k8s.io/updated_at=2025_12_05T06_11_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=addons-640282 minikube.k8s.io/primary=true
	I1205 06:11:48.694263  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:48.767919  445166 ops.go:34] apiserver oom_adj: -16
	I1205 06:11:49.194466  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:49.694347  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:50.194399  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:50.695008  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:51.195236  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:51.694529  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:52.194848  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:52.295493  445166 kubeadm.go:1114] duration metric: took 3.716870526s to wait for elevateKubeSystemPrivileges
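	The burst of kubectl get sa default calls above is a poll: the "default" ServiceAccount only exists once the controller-manager has created it, and the elevateKubeSystemPrivileges step timed at 3.7s here needs it before the cluster-admin binding is usable. The same wait as an explicit loop (the interval is illustrative):
	
	  until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done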
	I1205 06:11:52.295521  445166 kubeadm.go:403] duration metric: took 21.077074658s to StartCluster
	I1205 06:11:52.295539  445166 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:52.295658  445166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:52.296114  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:52.296307  445166 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:11:52.296444  445166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 06:11:52.296700  445166 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:11:52.296739  445166 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 06:11:52.296811  445166 addons.go:70] Setting yakd=true in profile "addons-640282"
	I1205 06:11:52.296828  445166 addons.go:239] Setting addon yakd=true in "addons-640282"
	I1205 06:11:52.296849  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.297365  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.297615  445166 addons.go:70] Setting inspektor-gadget=true in profile "addons-640282"
	I1205 06:11:52.297634  445166 addons.go:239] Setting addon inspektor-gadget=true in "addons-640282"
	I1205 06:11:52.297658  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.298090  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.298316  445166 addons.go:70] Setting metrics-server=true in profile "addons-640282"
	I1205 06:11:52.298339  445166 addons.go:239] Setting addon metrics-server=true in "addons-640282"
	I1205 06:11:52.298368  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.298841  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.299142  445166 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-640282"
	I1205 06:11:52.299170  445166 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-640282"
	I1205 06:11:52.299193  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.299594  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.306539  445166 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-640282"
	I1205 06:11:52.306575  445166 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-640282"
	I1205 06:11:52.306617  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.307089  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.307936  445166 addons.go:70] Setting cloud-spanner=true in profile "addons-640282"
	I1205 06:11:52.307964  445166 addons.go:239] Setting addon cloud-spanner=true in "addons-640282"
	I1205 06:11:52.307996  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.308423  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.308883  445166 addons.go:70] Setting registry=true in profile "addons-640282"
	I1205 06:11:52.308905  445166 addons.go:239] Setting addon registry=true in "addons-640282"
	I1205 06:11:52.308930  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.309354  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.316350  445166 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-640282"
	I1205 06:11:52.316426  445166 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-640282"
	I1205 06:11:52.316457  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.316915  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.326520  445166 addons.go:70] Setting registry-creds=true in profile "addons-640282"
	I1205 06:11:52.326552  445166 addons.go:239] Setting addon registry-creds=true in "addons-640282"
	I1205 06:11:52.326588  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.327056  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.332502  445166 addons.go:70] Setting default-storageclass=true in profile "addons-640282"
	I1205 06:11:52.332549  445166 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-640282"
	I1205 06:11:52.332918  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.348125  445166 addons.go:70] Setting gcp-auth=true in profile "addons-640282"
	I1205 06:11:52.348162  445166 mustload.go:66] Loading cluster: addons-640282
	I1205 06:11:52.348377  445166 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:11:52.351375  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.354596  445166 addons.go:70] Setting storage-provisioner=true in profile "addons-640282"
	I1205 06:11:52.354626  445166 addons.go:239] Setting addon storage-provisioner=true in "addons-640282"
	I1205 06:11:52.354660  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.355143  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.397956  445166 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-640282"
	I1205 06:11:52.397991  445166 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-640282"
	I1205 06:11:52.398344  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.408590  445166 addons.go:70] Setting ingress=true in profile "addons-640282"
	I1205 06:11:52.408628  445166 addons.go:239] Setting addon ingress=true in "addons-640282"
	I1205 06:11:52.408679  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.409178  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.424885  445166 addons.go:70] Setting ingress-dns=true in profile "addons-640282"
	I1205 06:11:52.424923  445166 addons.go:239] Setting addon ingress-dns=true in "addons-640282"
	I1205 06:11:52.424964  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.425005  445166 addons.go:70] Setting volcano=true in profile "addons-640282"
	I1205 06:11:52.425036  445166 addons.go:239] Setting addon volcano=true in "addons-640282"
	I1205 06:11:52.425063  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.425466  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.425474  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.435433  445166 out.go:179] * Verifying Kubernetes components...
	I1205 06:11:52.438536  445166 addons.go:70] Setting volumesnapshots=true in profile "addons-640282"
	I1205 06:11:52.438574  445166 addons.go:239] Setting addon volumesnapshots=true in "addons-640282"
	I1205 06:11:52.438622  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.439079  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.465658  445166 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 06:11:52.468533  445166 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1205 06:11:52.494020  445166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:11:52.494304  445166 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 06:11:52.501806  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 06:11:52.501882  445166 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 06:11:52.501982  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.513039  445166 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-640282"
	I1205 06:11:52.513087  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.513545  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.518269  445166 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1205 06:11:52.519172  445166 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:11:52.519221  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 06:11:52.519335  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.531204  445166 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1205 06:11:52.531494  445166 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:11:52.531542  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 06:11:52.531644  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.537912  445166 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1205 06:11:52.541624  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 06:11:52.541644  445166 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 06:11:52.541706  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.565182  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 06:11:52.566648  445166 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1205 06:11:52.566673  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 06:11:52.566738  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.569829  445166 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1205 06:11:52.573081  445166 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:11:52.573156  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1205 06:11:52.573306  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.606669  445166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 06:11:52.606818  445166 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:11:52.606829  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1205 06:11:52.606884  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.612426  445166 addons.go:239] Setting addon default-storageclass=true in "addons-640282"
	I1205 06:11:52.612467  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.612870  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.617891  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.621548  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	W1205 06:11:52.621949  445166 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 06:11:52.622862  445166 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:11:52.623923  445166 out.go:179]   - Using image docker.io/registry:3.0.0
	I1205 06:11:52.641050  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 06:11:52.647387  445166 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:11:52.647411  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:11:52.647478  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.651737  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.654892  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:11:52.655563  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 06:11:52.655730  445166 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1205 06:11:52.661744  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1205 06:11:52.663489  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 06:11:52.663514  445166 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 06:11:52.663598  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.679162  445166 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1205 06:11:52.682547  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 06:11:52.682932  445166 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:11:52.682945  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 06:11:52.683044  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.689672  445166 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 06:11:52.689739  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 06:11:52.689834  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.709004  445166 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:11:52.709029  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1205 06:11:52.709099  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.714680  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.715161  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.715704  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 06:11:52.722532  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 06:11:52.726047  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 06:11:52.732974  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 06:11:52.736435  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 06:11:52.742734  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 06:11:52.742768  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 06:11:52.742848  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.771644  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.806923  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.811340  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.811832  445166 out.go:179]   - Using image docker.io/busybox:stable
	I1205 06:11:52.815898  445166 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 06:11:52.818964  445166 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:11:52.818985  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 06:11:52.819049  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.820654  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.841525  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.868481  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.890502  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.898803  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.910052  445166 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:11:52.910073  445166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:11:52.910131  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.918526  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.920436  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.946926  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.957113  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	W1205 06:11:52.960625  445166 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:11:52.960659  445166 retry.go:31] will retry after 344.36104ms: ssh: handshake failed: EOF
	W1205 06:11:52.960730  445166 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:11:52.960738  445166 retry.go:31] will retry after 186.819772ms: ssh: handshake failed: EOF
	I1205 06:11:53.048252  445166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1205 06:11:53.149107  445166 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:11:53.149141  445166 retry.go:31] will retry after 495.766296ms: ssh: handshake failed: EOF
	I1205 06:11:53.544713  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:11:53.554182  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:11:53.555987  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:11:53.559318  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:11:53.562864  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 06:11:53.562929  445166 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 06:11:53.721940  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 06:11:53.732935  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:11:53.735892  445166 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 06:11:53.735957  445166 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 06:11:53.763261  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 06:11:53.763325  445166 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 06:11:53.938310  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:11:53.949740  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:11:53.953931  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 06:11:53.953955  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 06:11:53.958064  445166 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 06:11:53.958090  445166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 06:11:53.960836  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 06:11:53.960859  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 06:11:54.040893  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 06:11:54.040930  445166 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 06:11:54.043719  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 06:11:54.043744  445166 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 06:11:54.046659  445166 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:11:54.046681  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 06:11:54.118424  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 06:11:54.118480  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 06:11:54.164189  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:11:54.214426  445166 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 06:11:54.214472  445166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 06:11:54.222088  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:11:54.266455  445166 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.65975521s)
	I1205 06:11:54.266484  445166 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
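The two Completed lines above show how minikube publishes host.minikube.internal: it round-trips the coredns ConfigMap through sed, splicing a hosts block in front of the Corefile's forward directive. Below is a minimal client-go sketch of the same splice done in Go rather than via kubectl-and-sed; the Corefile layout and the exact forward line are assumptions taken from the sed expression in the log, and this is illustrative, not minikube's own code.

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // hostsBlock mirrors what the sed pipeline in the log inserts before the
    // "forward . /etc/resolv.conf" line of the CoreDNS Corefile.
    const hostsBlock = "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            // Splice the hosts block in front of the forward directive, as sed does.
            // Assumes the forward line starts exactly like this; the sed regex is looser.
            corefile = strings.Replace(corefile,
                "        forward . /etc/resolv.conf",
                hostsBlock+"        forward . /etc/resolv.conf", 1)
            cm.Data["Corefile"] = corefile
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
        fmt.Println("host record present in CoreDNS Corefile")
    }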
	I1205 06:11:54.266364  445166 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.218073401s)
	I1205 06:11:54.267956  445166 node_ready.go:35] waiting up to 6m0s for node "addons-640282" to be "Ready" ...
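node_ready.go then begins a six-minute poll of the node's Ready condition; the recurring `"Ready":"False" status (will retry)` warnings later in this log come from that loop. A minimal client-go sketch of that kind of wait follows, assuming nothing about minikube's internals beyond what the log shows (wait.PollUntilContextTimeout assumes a recent apimachinery, v0.27+).

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until true or timeout.
    // A hedged sketch of what the node_ready.go line above does, not minikube's code.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitNodeReady(context.Background(), cs, "addons-640282", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println(`node "addons-640282" is Ready`)
    }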
	I1205 06:11:54.320993  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 06:11:54.321023  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 06:11:54.322251  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:11:54.322275  445166 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 06:11:54.376284  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:11:54.376309  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 06:11:54.379673  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:11:54.489487  445166 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 06:11:54.489516  445166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 06:11:54.526310  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:11:54.574905  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:11:54.593215  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 06:11:54.593252  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 06:11:54.704295  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 06:11:54.704330  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 06:11:54.776375  445166 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-640282" context rescaled to 1 replicas
	I1205 06:11:54.843164  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 06:11:54.843194  445166 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 06:11:54.933926  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.389124911s)
	I1205 06:11:54.939398  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 06:11:54.939430  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 06:11:55.057846  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.503578724s)
	I1205 06:11:55.090522  445166 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:11:55.090556  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 06:11:55.123243  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 06:11:55.123266  445166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 06:11:55.139941  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 06:11:55.139966  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 06:11:55.156141  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 06:11:55.156178  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 06:11:55.181909  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:11:55.251653  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.695581145s)
	I1205 06:11:55.308205  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:11:55.308232  445166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 06:11:55.555846  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1205 06:11:56.283711  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:11:58.673843  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.951854418s)
	I1205 06:11:58.673997  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.941001908s)
	I1205 06:11:58.674019  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.114644447s)
	I1205 06:11:58.674129  445166 addons.go:495] Verifying addon ingress=true in "addons-640282"
	I1205 06:11:58.674211  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.724440437s)
	I1205 06:11:58.674678  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.492733676s)
	W1205 06:11:58.674718  445166 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:11:58.674747  445166 retry.go:31] will retry after 265.530233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
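This failure is a CRD-establishment race: the single apply both creates the VolumeSnapshot CRDs and submits a VolumeSnapshotClass object, but the new API group is not yet served, so the REST mapping lookup fails exactly as the stderr says. One standard way to avoid the race is to wait for the CRD's Established condition before applying any custom resources; a minimal sketch with the apiextensions client is below (names taken from the log; this is not minikube's actual fix, which simply retries).

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitEstablished blocks until the named CRD reports Established=True.
    func waitEstablished(ctx context.Context, cs *apiextclient.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not created yet; keep polling
                }
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := apiextclient.NewForConfigOrDie(cfg)
        if err := waitEstablished(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
    }

As the retry.go line above indicates, minikube instead backs off briefly and reapplies (later with --force), which converges once the apiserver starts serving the new group.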
	I1205 06:11:58.674334  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.51012143s)
	I1205 06:11:58.674356  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.452245226s)
	I1205 06:11:58.675034  445166 addons.go:495] Verifying addon registry=true in "addons-640282"
	I1205 06:11:58.674460  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.294763358s)
	I1205 06:11:58.674523  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.148185044s)
	I1205 06:11:58.675474  445166 addons.go:495] Verifying addon metrics-server=true in "addons-640282"
	I1205 06:11:58.674558  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.099611139s)
	I1205 06:11:58.674063  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.735731417s)
	I1205 06:11:58.678494  445166 out.go:179] * Verifying ingress addon...
	I1205 06:11:58.680333  445166 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-640282 service yakd-dashboard -n yakd-dashboard
	
	I1205 06:11:58.680409  445166 out.go:179] * Verifying registry addon...
	I1205 06:11:58.683425  445166 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 06:11:58.685297  445166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 06:11:58.692390  445166 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:11:58.692419  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:11:58.694268  445166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:11:58.694294  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1205 06:11:58.697691  445166 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
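The default-storageclass error above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the apiserver rejected the stale update. The standard remedy is client-go's RetryOnConflict, which re-reads the object on every attempt; a minimal sketch follows (the annotation key is the real default-class marker, everything else is illustrative).

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation, retrying on 409 Conflict.
    func markNonDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err // a Conflict here triggers another Get+Update round
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        if err := markNonDefault(context.Background(), kubernetes.NewForConfigOrDie(cfg), "local-path"); err != nil {
            panic(err)
        }
    }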
	W1205 06:11:58.771820  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:11:58.914849  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.358950886s)
	I1205 06:11:58.914894  445166 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-640282"
	I1205 06:11:58.918107  445166 out.go:179] * Verifying csi-hostpath-driver addon...
	I1205 06:11:58.922024  445166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 06:11:58.935567  445166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:11:58.935610  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
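The kapi.go "waiting for pod" lines that dominate the rest of this log are a poll over a label selector: list the matching pods, report their phase, and repeat until they leave Pending. A minimal sketch of that check, hedged as an approximation of kapi.go rather than its actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether every pod matching selector in ns is Running.
    func allRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil // e.g. still Pending, as in the lines above
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            if ok, err := allRunning(context.Background(), cs, "kube-system",
                "kubernetes.io/minikube-addons=csi-hostpath-driver"); err == nil && ok {
                fmt.Println("csi-hostpath-driver pods Running")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence of the poll above
        }
    }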
	I1205 06:11:58.940576  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:11:59.187765  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:11:59.188771  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:11:59.427310  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:11:59.689599  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:11:59.690081  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:11:59.925377  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:00.189713  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:00.234650  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:00.291026  445166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 06:12:00.291202  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:12:00.342095  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
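The cli_runner/sshutil pair above resolves the container's published SSH port with a docker inspect Go template and then dials 127.0.0.1 on that port. The same lookup can be reproduced directly; a small sketch shelling out to docker, with the template copied verbatim from the cli_runner line:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Indexes the published port list for container port 22/tcp and
        // prints the mapped host port.
        const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-640282").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33133 in this run
    }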
	I1205 06:12:00.426644  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:00.468633  445166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 06:12:00.485573  445166 addons.go:239] Setting addon gcp-auth=true in "addons-640282"
	I1205 06:12:00.485673  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:12:00.486254  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:12:00.505900  445166 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 06:12:00.505956  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:12:00.525689  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:12:00.687356  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:00.688264  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:00.925474  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:01.186989  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:01.189472  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1205 06:12:01.273512  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:01.426938  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:01.630178  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.689550114s)
	I1205 06:12:01.630327  445166 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.124405311s)
	I1205 06:12:01.633304  445166 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 06:12:01.636203  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:12:01.639004  445166 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 06:12:01.639034  445166 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 06:12:01.653438  445166 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 06:12:01.653521  445166 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 06:12:01.667101  445166 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:12:01.667126  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 06:12:01.682968  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:12:01.689978  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:01.691003  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:01.925349  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:02.202421  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:02.202913  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:02.234466  445166 addons.go:495] Verifying addon gcp-auth=true in "addons-640282"
	I1205 06:12:02.237635  445166 out.go:179] * Verifying gcp-auth addon...
	I1205 06:12:02.241216  445166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 06:12:02.295955  445166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 06:12:02.295978  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:02.425089  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:02.686535  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:02.688752  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:02.744548  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:02.925544  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:03.187916  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:03.189331  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:03.243885  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:03.425505  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:03.689114  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:03.689917  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:03.745012  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:03.770891  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:03.925103  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:04.187130  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:04.187348  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:04.244354  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:04.426212  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:04.686707  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:04.688437  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:04.744346  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:04.925168  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:05.187244  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:05.188321  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:05.244198  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:05.425764  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:05.687233  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:05.689278  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:05.745145  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:05.925696  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:06.188376  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:06.188446  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:06.244462  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:06.271275  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:06.425308  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:06.686540  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:06.688724  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:06.745063  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:06.925961  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:07.187315  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:07.194612  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:07.244047  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:07.425869  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:07.687024  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:07.688907  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:07.744754  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:07.925702  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:08.188172  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:08.188968  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:08.244950  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:08.426196  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:08.688464  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:08.689420  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:08.744107  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:08.770903  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:08.924694  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:09.186574  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:09.188225  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:09.244048  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:09.424818  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:09.687728  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:09.687902  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:09.746650  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:09.924892  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:10.188280  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:10.188485  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:10.244252  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:10.425680  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:10.686837  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:10.689197  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:10.744173  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:10.925778  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:11.187904  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:11.188093  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:11.244787  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:11.271540  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:11.425371  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:11.688223  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:11.688458  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:11.744238  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:11.924688  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:12.186806  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:12.188569  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:12.244511  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:12.425299  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:12.686672  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:12.688546  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:12.744498  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:12.925188  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:13.187141  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:13.187574  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:13.244292  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:13.425856  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:13.689465  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:13.689700  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:12:13.771026  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:13.789231  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:13.924935  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:14.187611  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:14.189070  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:14.244782  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:14.430836  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:14.686970  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:14.688949  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:14.744814  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:14.925364  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:15.186564  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:15.188631  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:15.244696  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:15.425646  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:15.687398  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:15.689428  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:15.744417  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:15.771196  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:15.925416  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:16.186345  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:16.188538  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:16.244461  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:16.425725  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:16.688147  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:16.689234  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:16.745035  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:16.925906  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:17.186944  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:17.188706  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:17.244579  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:17.425143  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:17.687573  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:17.689453  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:17.744325  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:17.771367  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:17.925041  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:18.186971  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:18.188411  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:18.244299  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:18.425898  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:18.687147  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:18.687635  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:18.747352  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:18.926133  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:19.187620  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:19.188546  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:19.244121  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:19.425670  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:19.689130  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:19.689613  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:19.744529  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:19.771617  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:19.925643  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:20.188113  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:20.189342  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:20.243928  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:20.426120  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:20.687160  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:20.688196  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:20.745052  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:20.925793  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:21.187964  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:21.189151  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:21.251285  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:21.425026  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:21.688218  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:21.688532  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:21.744039  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:21.924761  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:22.186310  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:22.188758  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:22.244658  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:22.271419  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:22.425510  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:22.687150  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:22.688886  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:22.744856  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:22.925166  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:23.187142  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:23.188685  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:23.244383  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:23.425372  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:23.686931  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:23.688682  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:23.744612  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:23.925379  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:24.187030  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:24.188435  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:24.244526  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:24.271582  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:24.426962  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:24.687417  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:24.688062  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:24.744716  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:24.925901  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:25.186802  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:25.188953  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:25.244825  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:25.425356  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:25.687270  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:25.688704  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:25.744524  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:25.925446  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:26.187453  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:26.188588  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:26.244569  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:26.426021  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:26.687809  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:26.688737  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:26.744573  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:26.771730  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:26.925670  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:27.187093  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:27.189149  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:27.244939  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:27.424955  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:27.688472  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:27.688939  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:27.744751  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:27.925674  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:28.188274  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:28.190160  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:28.244087  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:28.425964  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:28.688663  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:28.688740  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:28.745091  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:28.924888  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:29.188077  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:29.188216  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:29.245015  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:29.271848  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:29.425555  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:29.686623  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:29.703810  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:29.745086  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:29.925808  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:30.187784  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:30.189728  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:30.244660  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:30.425607  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:30.686636  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:30.688368  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:30.745028  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:30.925280  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:31.188110  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:31.188164  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:31.245063  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:31.425057  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:31.687117  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:31.688527  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:31.744358  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:31.770974  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:31.925181  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:32.188130  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:32.188341  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:32.244008  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:32.425738  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:32.687609  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:32.689345  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:32.744417  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:32.925014  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:33.187149  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:33.187951  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:33.245233  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:33.424913  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:33.687233  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:33.688337  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:33.744123  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:33.925852  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:34.230985  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:34.243048  445166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:12:34.243071  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:34.256453  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:34.276457  445166 node_ready.go:49] node "addons-640282" is "Ready"
	I1205 06:12:34.276487  445166 node_ready.go:38] duration metric: took 40.008506714s for node "addons-640282" to be "Ready" ...
	I1205 06:12:34.276503  445166 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:12:34.276565  445166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:12:34.305036  445166 api_server.go:72] duration metric: took 42.008680166s to wait for apiserver process to appear ...
	I1205 06:12:34.305062  445166 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:12:34.305080  445166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 06:12:34.315177  445166 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 06:12:34.316864  445166 api_server.go:141] control plane version: v1.34.2
	I1205 06:12:34.316892  445166 api_server.go:131] duration metric: took 11.824403ms to wait for apiserver health ...
	I1205 06:12:34.316902  445166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:12:34.327504  445166 system_pods.go:59] 19 kube-system pods found
	I1205 06:12:34.327543  445166 system_pods.go:61] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:34.327551  445166 system_pods.go:61] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending
	I1205 06:12:34.327557  445166 system_pods.go:61] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending
	I1205 06:12:34.327561  445166 system_pods.go:61] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:34.327566  445166 system_pods.go:61] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:34.327569  445166 system_pods.go:61] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:34.327573  445166 system_pods.go:61] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:34.327578  445166 system_pods.go:61] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:34.327582  445166 system_pods.go:61] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending
	I1205 06:12:34.327585  445166 system_pods.go:61] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:34.327589  445166 system_pods.go:61] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:34.327593  445166 system_pods.go:61] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending
	I1205 06:12:34.327600  445166 system_pods.go:61] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending
	I1205 06:12:34.327603  445166 system_pods.go:61] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending
	I1205 06:12:34.327609  445166 system_pods.go:61] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:34.327619  445166 system_pods.go:61] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending
	I1205 06:12:34.327623  445166 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending
	I1205 06:12:34.327628  445166 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending
	I1205 06:12:34.327637  445166 system_pods.go:61] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending
	I1205 06:12:34.327642  445166 system_pods.go:74] duration metric: took 10.735719ms to wait for pod list to return data ...
	I1205 06:12:34.327650  445166 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:12:34.332676  445166 default_sa.go:45] found service account: "default"
	I1205 06:12:34.332702  445166 default_sa.go:55] duration metric: took 5.043176ms for default service account to be created ...
	I1205 06:12:34.332711  445166 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:12:34.366121  445166 system_pods.go:86] 19 kube-system pods found
	I1205 06:12:34.366157  445166 system_pods.go:89] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:34.366164  445166 system_pods.go:89] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending
	I1205 06:12:34.366169  445166 system_pods.go:89] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending
	I1205 06:12:34.366173  445166 system_pods.go:89] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:34.366178  445166 system_pods.go:89] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:34.366183  445166 system_pods.go:89] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:34.366188  445166 system_pods.go:89] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:34.366192  445166 system_pods.go:89] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:34.366200  445166 system_pods.go:89] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending
	I1205 06:12:34.366204  445166 system_pods.go:89] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:34.366212  445166 system_pods.go:89] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:34.366217  445166 system_pods.go:89] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending
	I1205 06:12:34.366226  445166 system_pods.go:89] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending
	I1205 06:12:34.366230  445166 system_pods.go:89] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending
	I1205 06:12:34.366236  445166 system_pods.go:89] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:34.366244  445166 system_pods.go:89] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending
	I1205 06:12:34.366249  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending
	I1205 06:12:34.366253  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending
	I1205 06:12:34.366257  445166 system_pods.go:89] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending
	I1205 06:12:34.366273  445166 retry.go:31] will retry after 204.834201ms: missing components: kube-dns
	I1205 06:12:34.436267  445166 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:12:34.436302  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:34.661676  445166 system_pods.go:86] 19 kube-system pods found
	I1205 06:12:34.661716  445166 system_pods.go:89] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:34.661724  445166 system_pods.go:89] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending
	I1205 06:12:34.661730  445166 system_pods.go:89] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending
	I1205 06:12:34.661734  445166 system_pods.go:89] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:34.661738  445166 system_pods.go:89] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:34.661743  445166 system_pods.go:89] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:34.661748  445166 system_pods.go:89] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:34.661754  445166 system_pods.go:89] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:34.661761  445166 system_pods.go:89] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending
	I1205 06:12:34.661765  445166 system_pods.go:89] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:34.661772  445166 system_pods.go:89] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:34.661776  445166 system_pods.go:89] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending
	I1205 06:12:34.661780  445166 system_pods.go:89] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending
	I1205 06:12:34.661786  445166 system_pods.go:89] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:12:34.661798  445166 system_pods.go:89] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:34.661805  445166 system_pods.go:89] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending
	I1205 06:12:34.661815  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending
	I1205 06:12:34.661822  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:12:34.661826  445166 system_pods.go:89] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending
	I1205 06:12:34.661843  445166 retry.go:31] will retry after 329.681398ms: missing components: kube-dns
	I1205 06:12:34.687599  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:34.689024  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:34.746601  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:34.950436  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:35.023931  445166 system_pods.go:86] 19 kube-system pods found
	I1205 06:12:35.023971  445166 system_pods.go:89] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:35.023982  445166 system_pods.go:89] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:12:35.023991  445166 system_pods.go:89] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:12:35.023996  445166 system_pods.go:89] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:35.024002  445166 system_pods.go:89] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:35.024006  445166 system_pods.go:89] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:35.024012  445166 system_pods.go:89] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:35.024021  445166 system_pods.go:89] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:35.024029  445166 system_pods.go:89] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:12:35.024039  445166 system_pods.go:89] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:35.024044  445166 system_pods.go:89] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:35.024050  445166 system_pods.go:89] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:12:35.024062  445166 system_pods.go:89] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:12:35.024067  445166 system_pods.go:89] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:12:35.024073  445166 system_pods.go:89] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:35.024083  445166 system_pods.go:89] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:12:35.024089  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:12:35.024096  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:12:35.024102  445166 system_pods.go:89] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:12:35.024112  445166 system_pods.go:126] duration metric: took 691.395496ms to wait for k8s-apps to be running ...
	I1205 06:12:35.024153  445166 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:12:35.024223  445166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:12:35.056582  445166 system_svc.go:56] duration metric: took 32.421216ms WaitForService to wait for kubelet
	I1205 06:12:35.056651  445166 kubeadm.go:587] duration metric: took 42.760298438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:12:35.056684  445166 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:12:35.065822  445166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 06:12:35.065898  445166 node_conditions.go:123] node cpu capacity is 2
	I1205 06:12:35.065940  445166 node_conditions.go:105] duration metric: took 9.232337ms to run NodePressure ...
	I1205 06:12:35.065966  445166 start.go:242] waiting for startup goroutines ...
	I1205 06:12:35.190242  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:35.191685  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:35.291022  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:35.426531  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:35.690167  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:35.690204  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:35.744287  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:35.925260  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:36.188828  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:36.188918  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:36.288568  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:36.426202  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:36.690024  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:36.690481  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:36.745111  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:36.925596  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:37.198726  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:37.199260  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:37.292441  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:37.426252  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:37.688421  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:37.689354  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:37.744744  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:37.926581  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:38.189726  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:38.190121  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:38.245214  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:38.426084  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:38.690210  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:38.690686  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:38.789571  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:38.932287  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:39.193020  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:39.193457  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:39.244775  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:39.427611  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:39.692983  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:39.693448  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:39.744720  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:39.929153  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:40.196119  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:40.197581  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:40.245146  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:40.427235  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:40.690519  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:40.691322  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:40.789863  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:40.926244  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:41.189598  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:41.189735  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:41.244780  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:41.426262  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:41.686889  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:41.689270  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:41.744358  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:41.925413  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:42.190287  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:42.191988  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:42.245899  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:42.426907  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:42.689160  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:42.689460  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:42.744654  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:42.927435  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:43.187832  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:43.188954  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:43.244896  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:43.426296  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:43.688942  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:43.689096  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:43.744731  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:43.925549  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:44.187616  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:44.188634  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:44.244544  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:44.426350  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:44.686767  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:44.689313  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:44.744626  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:44.925842  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:45.190191  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:45.190306  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:45.245006  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:45.435911  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:45.687798  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:45.688320  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:45.745178  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:45.925865  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:46.190967  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:46.191499  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:46.245936  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:46.428356  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:46.687342  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:46.689632  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:46.744972  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:46.925759  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:47.189211  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:47.189333  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:47.244591  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:47.425992  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:47.688923  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:47.689635  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:47.745043  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:47.925844  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:48.187629  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:48.188699  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:48.244690  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:48.425487  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:48.687431  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:48.690042  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:48.745267  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:48.925976  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:49.189537  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:49.189933  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:49.245115  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:49.425754  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:49.688778  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:49.688930  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:49.744646  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:49.925664  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:50.188244  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:50.189790  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:50.244844  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:50.426006  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:50.688316  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:50.688644  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:50.744726  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:50.926672  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:51.188835  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:51.190335  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:51.244445  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:51.426161  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:51.689057  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:51.689224  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:51.744425  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:51.926410  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:52.187348  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:52.190095  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:52.246645  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:52.428212  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:52.689517  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:52.690506  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:52.744773  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:52.926015  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:53.189787  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:53.190219  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:53.244076  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:53.425404  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:53.689630  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:53.689873  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:53.744682  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:53.926455  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:54.188149  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:54.190333  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:54.244783  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:54.430011  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:54.689668  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:54.690053  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:54.745650  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:54.927716  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:55.188916  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:55.191200  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:55.247879  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:55.427730  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:55.689979  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:55.690266  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:55.745198  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:55.925152  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:56.189401  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:56.189668  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:56.245709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:56.427160  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:56.687118  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:56.688442  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:56.744999  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:56.925894  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:57.187699  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:57.188977  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:57.244864  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:57.426573  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:57.693372  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:57.693700  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:57.744572  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:57.925912  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:58.201099  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:58.204488  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:58.292224  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:58.425620  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:58.687104  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:58.689411  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:58.744638  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:58.926511  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:59.186813  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:59.189852  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:59.244894  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:59.425941  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:59.687657  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:59.689638  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:59.744462  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:59.926349  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:00.206351  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:00.206566  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:00.275875  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:00.432930  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:00.688043  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:00.689388  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:00.744643  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:00.926069  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:01.189887  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:01.190146  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:01.289712  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:01.425799  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:01.690320  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:01.690532  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:01.744570  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:01.925940  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:02.188616  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:02.188793  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:02.244999  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:02.427606  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:02.688042  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:02.689123  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:02.745183  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:02.925358  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:03.188835  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:03.190164  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:03.244965  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:03.425660  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:03.689416  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:03.689594  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:03.744536  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:03.926021  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:04.187143  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:04.188893  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:04.245114  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:04.427279  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:04.688865  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:04.689040  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:04.745224  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:04.925285  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:05.191339  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:05.191599  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:05.280721  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:05.426596  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:05.687374  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:05.689307  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:05.745052  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:05.925197  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:06.187300  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:06.187583  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:06.246439  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:06.426496  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:06.689516  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:06.690969  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:06.745695  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:06.929784  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:07.189611  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:07.190062  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:07.289793  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:07.426212  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:07.690091  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:07.690511  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:07.744621  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:07.926243  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:08.186815  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:08.189130  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:08.244515  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:08.426703  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:08.686702  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:08.688715  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:08.744999  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:08.925412  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:09.187464  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:09.188340  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:09.244577  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:09.426860  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:09.691259  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:09.692969  445166 kapi.go:107] duration metric: took 1m11.007671445s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 06:13:09.745792  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:09.926522  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:10.193284  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:10.244709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:10.426138  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:10.687025  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:10.745480  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:10.926263  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:11.187027  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:11.245257  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:11.425893  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:11.687607  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:11.744851  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:11.926011  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:12.187582  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:12.244891  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:12.426699  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:12.687411  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:12.787924  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:12.925942  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:13.188676  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:13.244217  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:13.428473  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:13.687134  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:13.745318  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:13.925623  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:14.187424  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:14.288440  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:14.427644  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:14.687692  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:14.745621  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:14.926623  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:15.187426  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:15.244769  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:15.429504  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:15.688099  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:15.746094  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:15.928359  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:16.187204  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:16.243984  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:16.426262  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:16.688314  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:16.745347  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:16.925881  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:17.187289  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:17.244450  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:17.425827  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:17.687071  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:17.745141  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:17.926142  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:18.187141  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:18.244881  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:18.425880  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:18.686946  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:18.744955  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:18.925606  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:19.187301  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:19.244769  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:19.429270  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:19.687495  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:19.744036  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:19.926538  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:20.187530  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:20.244946  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:20.435603  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:20.687074  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:20.745055  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:20.925412  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:21.191395  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:21.244884  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:21.426124  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:21.686882  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:21.745513  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:21.926276  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:22.188696  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:22.245429  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:22.426342  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:22.687840  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:22.744956  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:22.925893  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:23.188262  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:23.245230  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:23.425151  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:23.687353  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:23.746059  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:23.925521  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:24.187480  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:24.245265  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:24.425544  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:24.686497  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:24.744409  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:24.925685  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:25.186699  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:25.244575  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:25.426313  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:25.686990  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:25.787042  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:25.926426  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:26.187420  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:26.244705  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:26.425648  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:26.687184  445166 kapi.go:107] duration metric: took 1m28.003757366s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 06:13:26.745031  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:26.925390  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:27.244709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:27.426205  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:27.744896  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:27.925926  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:28.244324  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:28.425610  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:28.744997  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:28.926145  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:29.244620  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:29.427720  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:29.745091  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:29.925456  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:30.244484  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:30.425570  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:30.744480  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:30.938092  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:31.244417  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:31.426072  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:31.744477  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:31.926308  445166 kapi.go:107] duration metric: took 1m33.004287175s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 06:13:32.244569  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:32.745333  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:33.244709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:33.744097  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:34.245485  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:34.745224  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:35.244777  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:35.745118  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:36.244541  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:36.744252  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:37.245092  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:37.744332  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:38.245451  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:38.745106  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:39.244313  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:39.745163  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:40.244570  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:40.745435  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:41.244851  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:41.744098  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:42.246517  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:42.745332  445166 kapi.go:107] duration metric: took 1m40.504115494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
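
The kapi.go lines above are minikube's addon wait loop: for each addon it polls the pods matching a label selector on a short fixed interval, logging "current state: Pending" until the pods come up, then records the total wait as a duration metric. A minimal sketch of that pattern with client-go follows; the package name, function signature, and 500ms interval are illustrative assumptions, not minikube's exact code.

    // Package podwait sketches the polling loop behind the kapi.go lines above.
    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // WaitForPodsRunning polls pods matching selector until all are Running
    // or the timeout elapses. Interval and log format are assumptions.
    func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        for time.Since(start) < timeout {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
                fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
                return nil
            }
            fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func allRunning(pods []corev1.Pod) bool {
        for _, p := range pods {
            if p.Status.Phase != corev1.PodRunning {
                return false
            }
        }
        return true
    }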
	I1205 06:13:42.748292  445166 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-640282 cluster.
	I1205 06:13:42.751060  445166 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 06:13:42.753829  445166 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 06:13:42.756796  445166 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1205 06:13:42.759767  445166 addons.go:530] duration metric: took 1m50.463018388s for enable addons: enabled=[nvidia-device-plugin registry-creds amd-gpu-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
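
The three out.go lines above describe the gcp-auth addon's contract: a mutating webhook injects GCP credentials into every new pod, a pod opts out by carrying a label with the gcp-auth-skip-secret key, and pods created before the addon was enabled only pick up credentials after being recreated (or after rerunning addons enable with --refresh). A minimal client-go sketch of the opt-out, assuming a configured clientset; the pod name, namespace, image, and label value are placeholders:

    package addonsdemo

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // CreatePodSkippingGCPAuth creates a pod labeled so the gcp-auth webhook
    // leaves it without mounted credentials. Only the label key matters per
    // the message above; the value "true" is an arbitrary placeholder.
    func CreatePodSkippingGCPAuth(ctx context.Context, cs kubernetes.Interface) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-creds",
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "docker.io/kicbase/echo-server:1.0"}},
            },
        }
        _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
        return err
    }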
	I1205 06:13:42.759826  445166 start.go:247] waiting for cluster config update ...
	I1205 06:13:42.759847  445166 start.go:256] writing updated cluster config ...
	I1205 06:13:42.760141  445166 ssh_runner.go:195] Run: rm -f paused
	I1205 06:13:42.764527  445166 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:13:42.768283  445166 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jbbkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.772723  445166 pod_ready.go:94] pod "coredns-66bc5c9577-jbbkj" is "Ready"
	I1205 06:13:42.772753  445166 pod_ready.go:86] duration metric: took 4.442269ms for pod "coredns-66bc5c9577-jbbkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.774797  445166 pod_ready.go:83] waiting for pod "etcd-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.778762  445166 pod_ready.go:94] pod "etcd-addons-640282" is "Ready"
	I1205 06:13:42.778790  445166 pod_ready.go:86] duration metric: took 3.968491ms for pod "etcd-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.780921  445166 pod_ready.go:83] waiting for pod "kube-apiserver-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.785038  445166 pod_ready.go:94] pod "kube-apiserver-addons-640282" is "Ready"
	I1205 06:13:42.785061  445166 pod_ready.go:86] duration metric: took 4.116087ms for pod "kube-apiserver-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.787266  445166 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.168467  445166 pod_ready.go:94] pod "kube-controller-manager-addons-640282" is "Ready"
	I1205 06:13:43.168493  445166 pod_ready.go:86] duration metric: took 381.207785ms for pod "kube-controller-manager-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.369268  445166 pod_ready.go:83] waiting for pod "kube-proxy-lnnkp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.768018  445166 pod_ready.go:94] pod "kube-proxy-lnnkp" is "Ready"
	I1205 06:13:43.768056  445166 pod_ready.go:86] duration metric: took 398.762418ms for pod "kube-proxy-lnnkp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.968338  445166 pod_ready.go:83] waiting for pod "kube-scheduler-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:44.368921  445166 pod_ready.go:94] pod "kube-scheduler-addons-640282" is "Ready"
	I1205 06:13:44.368954  445166 pod_ready.go:86] duration metric: took 400.591017ms for pod "kube-scheduler-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:44.368969  445166 pod_ready.go:40] duration metric: took 1.60440847s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
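
The pod_ready.go block applies a stricter check than pod phase: a pod only counts as "Ready" once its PodReady status condition is True, which is why each control-plane pod is reported individually before the 4m0s extra-wait budget is released. A minimal sketch of that predicate against the corev1 API (the package and function names are illustrative):

    package podready

    import corev1 "k8s.io/api/core/v1"

    // IsPodReady reports whether the pod's Ready condition is True, the
    // predicate behind the `pod "..." is "Ready"` lines above.
    func IsPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }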
	I1205 06:13:44.423652  445166 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1205 06:13:44.427268  445166 out.go:179] * Done! kubectl is now configured to use "addons-640282" cluster and "default" namespace by default
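
start.go closes by reporting the minor-version skew between the local kubectl (1.33.2) and the cluster (1.34.2); a skew of 1 is inside kubectl's supported one-minor-version window, so only an informational line is printed. A toy version of that computation, using naive dotted-version parsing, purely for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version.
    func minor(v string) int {
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        client, cluster := "1.33.2", "1.34.2"
        skew := minor(cluster) - minor(client)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }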
	
	
	==> CRI-O <==
	Dec 05 06:16:27 addons-640282 crio[826]: time="2025-12-05T06:16:27.292592618Z" level=info msg="Removed container 1f2bf15e194d55109988da5393614eeeed7cae77abce3850a3a14b6354264f22: kube-system/registry-creds-764b6fb674-4zwkd/registry-creds" id=d6444b5c-0a63-464f-8194-18fec712466e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.099781884Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-8hrdc/POD" id=b2f3c4c3-5cf8-42ae-b477-6f5fd676cd73 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.0998577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.111795372Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8hrdc Namespace:default ID:5f8114074375c4494d30289d56b46598aaee8116e12c1f6f81c15d15fe8c727b UID:6453e75e-95d1-4828-8d58-edafd81dfdc2 NetNS:/var/run/netns/682ccc38-1e47-4265-9ea0-922da14b8895 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c1a2a8}] Aliases:map[]}"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.111958484Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-8hrdc to CNI network \"kindnet\" (type=ptp)"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.127364403Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8hrdc Namespace:default ID:5f8114074375c4494d30289d56b46598aaee8116e12c1f6f81c15d15fe8c727b UID:6453e75e-95d1-4828-8d58-edafd81dfdc2 NetNS:/var/run/netns/682ccc38-1e47-4265-9ea0-922da14b8895 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c1a2a8}] Aliases:map[]}"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.127683922Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-8hrdc for CNI network kindnet (type=ptp)"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.131455893Z" level=info msg="Ran pod sandbox 5f8114074375c4494d30289d56b46598aaee8116e12c1f6f81c15d15fe8c727b with infra container: default/hello-world-app-5d498dc89-8hrdc/POD" id=b2f3c4c3-5cf8-42ae-b477-6f5fd676cd73 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.13643573Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f30cccee-641f-4c3c-9630-fcc3c638a3dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.136565963Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=f30cccee-641f-4c3c-9630-fcc3c638a3dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.136601508Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=f30cccee-641f-4c3c-9630-fcc3c638a3dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.142597586Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=18c7e1f4-c734-4e98-9d4f-9a4945f543cb name=/runtime.v1.ImageService/PullImage
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.145696918Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.7427042Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=18c7e1f4-c734-4e98-9d4f-9a4945f543cb name=/runtime.v1.ImageService/PullImage
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.743506252Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0364b7da-eeac-4bd6-adc1-0bdde998e418 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.746609457Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=46c243d5-d4f0-4854-adf2-d530691e2577 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.756103467Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-8hrdc/hello-world-app" id=94ee73e4-6589-4ee9-9839-cb6b791438b8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.756899735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.771868775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.773273902Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b2f0409cff31cfdc0a7801dada1cd868af51e81efb44db84524daf475f4bec07/merged/etc/passwd: no such file or directory"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.77346574Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b2f0409cff31cfdc0a7801dada1cd868af51e81efb44db84524daf475f4bec07/merged/etc/group: no such file or directory"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.773858417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.796277179Z" level=info msg="Created container e4832549381b6c9c644abc577484c9b864ab71ac2e163408cdea7c9d9f5ba174: default/hello-world-app-5d498dc89-8hrdc/hello-world-app" id=94ee73e4-6589-4ee9-9839-cb6b791438b8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.797664895Z" level=info msg="Starting container: e4832549381b6c9c644abc577484c9b864ab71ac2e163408cdea7c9d9f5ba174" id=8c877bb1-d377-43b7-856b-d1f75eca7452 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:16:44 addons-640282 crio[826]: time="2025-12-05T06:16:44.802489505Z" level=info msg="Started container" PID=7114 containerID=e4832549381b6c9c644abc577484c9b864ab71ac2e163408cdea7c9d9f5ba174 description=default/hello-world-app-5d498dc89-8hrdc/hello-world-app id=8c877bb1-d377-43b7-856b-d1f75eca7452 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5f8114074375c4494d30289d56b46598aaee8116e12c1f6f81c15d15fe8c727b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	e4832549381b6       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   5f8114074375c       hello-world-app-5d498dc89-8hrdc            default
	fcca0f36b3fc3       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             18 seconds ago           Exited              registry-creds                           4                   8214458590c8e       registry-creds-764b6fb674-4zwkd            kube-system
	9930b367dcb5f       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   49a4618e3191d       nginx                                      default
	f463a8307b8ec       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   91a020e46ac4f       busybox                                    default
	f0661aef38682       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   91ec0cd7cdf85       gcp-auth-78565c9fb4-xdcct                  gcp-auth
	ae8fe59a87c4c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	ee08f2df7a0e7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	1343c4e249efa       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	576b9f44bab0b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	dd155a05dcc71       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   1f9299b37c527       ingress-nginx-controller-6c8bf45fb-r8d6p   ingress-nginx
	a02282a8dda4a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   2fe794cafa0df       gadget-62wxv                               gadget
	36207d2abda3a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	fdc19967900f7       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   d420f5d040a14       cloud-spanner-emulator-5bdddb765-4xzt7     default
	8ca95d8216ff9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   a892af72684ee       registry-proxy-nlqwm                       kube-system
	e05ecd19c0205       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   50781fa0e7e0c       csi-hostpath-resizer-0                     kube-system
	2c117baff7e0b       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   43dea51649ec5       nvidia-device-plugin-daemonset-ft52z       kube-system
	1e9a9d06da060       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   2f1e5e72ebe6e       registry-6b586f9694-4sckq                  kube-system
	dfc4a73354f1a       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    1                   3a8fb17d68cf6       ingress-nginx-admission-patch-rl2nx        ingress-nginx
	66ce1a90aa991       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   dd1b4999e33fe       ingress-nginx-admission-create-w7jhq       ingress-nginx
	eb58125e4d2c7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   acbd0037edc74       snapshot-controller-7d9fbc56b8-7kcmn       kube-system
	249d0f3d91c82       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	56309b0868051       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   7d0504f045ecc       kube-ingress-dns-minikube                  kube-system
	513dee4bbb57b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   c60143880a783       snapshot-controller-7d9fbc56b8-8h7tn       kube-system
	2798958d4d891       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   b5a421be82bb3       yakd-dashboard-5ff678cb9-tgblp             yakd-dashboard
	75d36e5745352       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   ea5db399fbede       csi-hostpath-attacher-0                    kube-system
	18303e803325e       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   84b1c08cbaa7c       metrics-server-85b7d694d7-wmgpc            kube-system
	b703017818da9       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   b65f4b9f35b44       local-path-provisioner-648f6765c9-ndg8q    local-path-storage
	8f819a6511b2f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   57c799304a7d8       storage-provisioner                        kube-system
	9e50a765cdd0b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   cf9c182794ffe       coredns-66bc5c9577-jbbkj                   kube-system
	954b5a1cbede7       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   9385f60acfa47       kube-proxy-lnnkp                           kube-system
	afa775c377245       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   44a84584e9a3e       kindnet-bz4mm                              kube-system
	dbaf492de7d0d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   8f1302c750456       kube-scheduler-addons-640282               kube-system
	130424b6298d0       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   92f507b818ad7       kube-controller-manager-addons-640282      kube-system
	ce5973768e215       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   54b06010ae3b0       etcd-addons-640282                         kube-system
	6a73bdffbbb7c       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   0966444cb3d55       kube-apiserver-addons-640282               kube-system
	
	
	==> coredns [9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84] <==
	[INFO] 10.244.0.8:34004 - 9474 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002018736s
	[INFO] 10.244.0.8:34004 - 3470 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000152592s
	[INFO] 10.244.0.8:34004 - 37296 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000200585s
	[INFO] 10.244.0.8:44865 - 5170 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157179s
	[INFO] 10.244.0.8:44865 - 4965 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104231s
	[INFO] 10.244.0.8:37078 - 13901 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085391s
	[INFO] 10.244.0.8:37078 - 14098 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179817s
	[INFO] 10.244.0.8:45875 - 56872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102614s
	[INFO] 10.244.0.8:45875 - 56707 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148998s
	[INFO] 10.244.0.8:36042 - 56699 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00163534s
	[INFO] 10.244.0.8:36042 - 56527 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001752116s
	[INFO] 10.244.0.8:44496 - 41356 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133236s
	[INFO] 10.244.0.8:44496 - 41175 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165598s
	[INFO] 10.244.0.21:33361 - 20419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178307s
	[INFO] 10.244.0.21:59574 - 59840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017775s
	[INFO] 10.244.0.21:53634 - 64002 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000240676s
	[INFO] 10.244.0.21:39121 - 22281 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000350067s
	[INFO] 10.244.0.21:37559 - 15277 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120542s
	[INFO] 10.244.0.21:37560 - 33087 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132777s
	[INFO] 10.244.0.21:55444 - 54378 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00222335s
	[INFO] 10.244.0.21:35327 - 16481 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002373144s
	[INFO] 10.244.0.21:54324 - 24210 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001231643s
	[INFO] 10.244.0.21:42400 - 9481 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001466789s
	[INFO] 10.244.0.24:59367 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262231s
	[INFO] 10.244.0.24:51541 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157146s
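
Each burst of coredns lines above is a single lookup fanning out through the pod's resolv.conf search path: with the cluster default ndots:5, a short name such as storage.googleapis.com is first tried with every search suffix (the pod's namespace domain, svc.cluster.local, cluster.local, then the EC2-internal domain), each answering NXDOMAIN, before the bare name finally returns NOERROR. A name with a trailing dot is treated as fully qualified and skips the walk entirely; a minimal sketch:

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        var r net.Resolver
        // The trailing dot marks the name as an FQDN, bypassing search-suffix
        // expansion and the NXDOMAIN cascade visible in the coredns log.
        addrs, err := r.LookupHost(context.Background(), "storage.googleapis.com.")
        fmt.Println(addrs, err)
    }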
	
	
	==> describe nodes <==
	Name:               addons-640282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-640282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=addons-640282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_11_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-640282
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-640282"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-640282
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:16:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:16:44 +0000   Fri, 05 Dec 2025 06:11:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:16:44 +0000   Fri, 05 Dec 2025 06:11:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:16:44 +0000   Fri, 05 Dec 2025 06:11:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:16:44 +0000   Fri, 05 Dec 2025 06:12:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-640282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                aa3571c6-896a-4255-aadd-3629cc6297b8
	  Boot ID:                    6438d548-ea0a-487b-93bc-8af12c014d83
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-5bdddb765-4xzt7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  default                     hello-world-app-5d498dc89-8hrdc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-62wxv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  gcp-auth                    gcp-auth-78565c9fb4-xdcct                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-r8d6p    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-jbbkj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m52s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpathplugin-dqw5d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 etcd-addons-640282                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m58s
	  kube-system                 kindnet-bz4mm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-640282                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-640282       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-lnnkp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-640282                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 metrics-server-85b7d694d7-wmgpc             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m48s
	  kube-system                 nvidia-device-plugin-daemonset-ft52z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-6b586f9694-4sckq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 registry-creds-764b6fb674-4zwkd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-proxy-nlqwm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-7kcmn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 snapshot-controller-7d9fbc56b8-8h7tn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-648f6765c9-ndg8q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tgblp              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m51s  kube-proxy       
	  Normal   Starting                 4m58s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m58s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m58s  kubelet          Node addons-640282 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m58s  kubelet          Node addons-640282 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m58s  kubelet          Node addons-640282 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m53s  node-controller  Node addons-640282 event: Registered Node addons-640282 in Controller
	  Normal   NodeReady                4m11s  kubelet          Node addons-640282 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 03:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014702] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514036] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693] <==
	{"level":"warn","ts":"2025-12-05T06:11:43.741309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.754872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.771050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.793570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.810169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.859708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.912178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.917660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.923715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.938816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.967934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.972092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.010574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.036840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.043702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.077502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.100071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.119459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.211797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:59.229166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:59.239579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.124449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.139080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.170616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.182056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40506","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f0661aef386820b0e5bb007bc6340e7a81a61798c8a5d657bde32c3a21a9ec07] <==
	2025/12/05 06:13:42 GCP Auth Webhook started!
	2025/12/05 06:13:44 Ready to marshal response ...
	2025/12/05 06:13:44 Ready to write response ...
	2025/12/05 06:13:45 Ready to marshal response ...
	2025/12/05 06:13:45 Ready to write response ...
	2025/12/05 06:13:45 Ready to marshal response ...
	2025/12/05 06:13:45 Ready to write response ...
	2025/12/05 06:14:03 Ready to marshal response ...
	2025/12/05 06:14:03 Ready to write response ...
	2025/12/05 06:14:06 Ready to marshal response ...
	2025/12/05 06:14:06 Ready to write response ...
	2025/12/05 06:14:24 Ready to marshal response ...
	2025/12/05 06:14:24 Ready to write response ...
	2025/12/05 06:14:30 Ready to marshal response ...
	2025/12/05 06:14:30 Ready to write response ...
	2025/12/05 06:14:52 Ready to marshal response ...
	2025/12/05 06:14:52 Ready to write response ...
	2025/12/05 06:14:52 Ready to marshal response ...
	2025/12/05 06:14:52 Ready to write response ...
	2025/12/05 06:15:01 Ready to marshal response ...
	2025/12/05 06:15:01 Ready to write response ...
	2025/12/05 06:16:43 Ready to marshal response ...
	2025/12/05 06:16:43 Ready to write response ...
	
	
	==> kernel <==
	 06:16:45 up  2:58,  0 user,  load average: 0.96, 1.47, 1.56
	Linux addons-640282 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b] <==
	I1205 06:14:43.938558       1 main.go:301] handling current node
	I1205 06:14:53.934546       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:14:53.934622       1 main.go:301] handling current node
	I1205 06:15:03.934500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:15:03.934583       1 main.go:301] handling current node
	I1205 06:15:13.938535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:15:13.938572       1 main.go:301] handling current node
	I1205 06:15:23.935461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:15:23.935578       1 main.go:301] handling current node
	I1205 06:15:33.941157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:15:33.941195       1 main.go:301] handling current node
	I1205 06:15:43.942966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:15:43.942999       1 main.go:301] handling current node
	I1205 06:15:53.941460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:15:53.941571       1 main.go:301] handling current node
	I1205 06:16:03.934497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:16:03.934535       1 main.go:301] handling current node
	I1205 06:16:13.935221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:16:13.935332       1 main.go:301] handling current node
	I1205 06:16:23.934810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:16:23.934843       1 main.go:301] handling current node
	I1205 06:16:33.935345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:16:33.935383       1 main.go:301] handling current node
	I1205 06:16:43.934476       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:16:43.934512       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:12:40.403972       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.194.127:443: connect: connection refused" logger="UnhandledError"
	W1205 06:12:41.404209       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:41.404271       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 06:12:41.404286       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 06:12:41.404327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:41.404387       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 06:12:41.405480       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 06:12:45.327260       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 06:12:45.415788       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:45.415915       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:12:45.415997       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	E1205 06:12:45.471447       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1205 06:13:55.925710       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40840: use of closed network connection
	I1205 06:14:15.060735       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 06:14:24.118289       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 06:14:24.465201       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.147.179"}
	I1205 06:16:43.957982       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.191.170"}
	
	
	==> kube-controller-manager [130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438] <==
	I1205 06:11:52.154813       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 06:11:52.155156       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 06:11:52.155218       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 06:11:52.155270       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 06:11:52.155518       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:11:52.156083       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:11:52.158453       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:11:52.158503       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 06:11:52.158533       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 06:11:52.161574       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:11:52.161642       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:11:52.161681       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:11:52.161692       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:11:52.161697       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:11:52.168284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:11:52.173631       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-640282" podCIDRs=["10.244.0.0/24"]
	E1205 06:11:57.516871       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1205 06:12:22.117351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 06:12:22.117504       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1205 06:12:22.117548       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1205 06:12:22.155177       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1205 06:12:22.160280       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1205 06:12:22.218065       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:12:22.260477       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:12:37.145786       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd] <==
	I1205 06:11:53.866345       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:11:53.967905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:11:54.068544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:11:54.068591       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:11:54.068662       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:11:54.112977       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:11:54.113050       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:11:54.127340       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:11:54.129996       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:11:54.130021       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:11:54.135841       1 config.go:200] "Starting service config controller"
	I1205 06:11:54.135863       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:11:54.135886       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:11:54.135891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:11:54.135909       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:11:54.135914       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:11:54.148906       1 config.go:309] "Starting node config controller"
	I1205 06:11:54.148928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:11:54.148936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:11:54.236405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:11:54.236449       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:11:54.236487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3] <==
	I1205 06:11:45.780403       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:11:45.780493       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:11:45.780814       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:11:45.785574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1205 06:11:45.786597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1205 06:11:45.792055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:11:45.792730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 06:11:45.792824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:11:45.792881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:11:45.793689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:11:45.798591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:11:45.798671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:11:45.798725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:11:45.798818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:11:45.798837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:11:45.798881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 06:11:45.798915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:11:45.798979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:11:45.799044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:11:45.799048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:11:45.799175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:11:45.799240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:11:45.799383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:11:46.600212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1205 06:11:49.180905       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:15:46 addons-640282 kubelet[1272]: I1205 06:15:46.677657    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4zwkd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:15:46 addons-640282 kubelet[1272]: I1205 06:15:46.678265    1272 scope.go:117] "RemoveContainer" containerID="1f2bf15e194d55109988da5393614eeeed7cae77abce3850a3a14b6354264f22"
	Dec 05 06:15:46 addons-640282 kubelet[1272]: E1205 06:15:46.679010    1272 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4zwkd_kube-system(fc52dd35-e9bf-4770-aa18-66aac8d15c08)\"" pod="kube-system/registry-creds-764b6fb674-4zwkd" podUID="fc52dd35-e9bf-4770-aa18-66aac8d15c08"
	Dec 05 06:15:47 addons-640282 kubelet[1272]: I1205 06:15:47.796593    1272 scope.go:117] "RemoveContainer" containerID="92be12d21686d407d5526189708ac71d6b7b0d254ffa4e36af8e37651400882a"
	Dec 05 06:15:47 addons-640282 kubelet[1272]: I1205 06:15:47.806245    1272 scope.go:117] "RemoveContainer" containerID="ffd941570a66c72913f362d8782eaca1b47e394f59e0608aaba72282f903aae9"
	Dec 05 06:16:01 addons-640282 kubelet[1272]: I1205 06:16:01.678047    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4zwkd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:01 addons-640282 kubelet[1272]: I1205 06:16:01.678128    1272 scope.go:117] "RemoveContainer" containerID="1f2bf15e194d55109988da5393614eeeed7cae77abce3850a3a14b6354264f22"
	Dec 05 06:16:01 addons-640282 kubelet[1272]: E1205 06:16:01.678314    1272 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4zwkd_kube-system(fc52dd35-e9bf-4770-aa18-66aac8d15c08)\"" pod="kube-system/registry-creds-764b6fb674-4zwkd" podUID="fc52dd35-e9bf-4770-aa18-66aac8d15c08"
	Dec 05 06:16:02 addons-640282 kubelet[1272]: I1205 06:16:02.678338    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nlqwm" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:12 addons-640282 kubelet[1272]: I1205 06:16:12.678197    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4zwkd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:12 addons-640282 kubelet[1272]: I1205 06:16:12.678272    1272 scope.go:117] "RemoveContainer" containerID="1f2bf15e194d55109988da5393614eeeed7cae77abce3850a3a14b6354264f22"
	Dec 05 06:16:12 addons-640282 kubelet[1272]: E1205 06:16:12.678496    1272 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4zwkd_kube-system(fc52dd35-e9bf-4770-aa18-66aac8d15c08)\"" pod="kube-system/registry-creds-764b6fb674-4zwkd" podUID="fc52dd35-e9bf-4770-aa18-66aac8d15c08"
	Dec 05 06:16:19 addons-640282 kubelet[1272]: I1205 06:16:19.678038    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-jbbkj" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:26 addons-640282 kubelet[1272]: I1205 06:16:26.678308    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4zwkd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:26 addons-640282 kubelet[1272]: I1205 06:16:26.678431    1272 scope.go:117] "RemoveContainer" containerID="1f2bf15e194d55109988da5393614eeeed7cae77abce3850a3a14b6354264f22"
	Dec 05 06:16:27 addons-640282 kubelet[1272]: I1205 06:16:27.275518    1272 scope.go:117] "RemoveContainer" containerID="1f2bf15e194d55109988da5393614eeeed7cae77abce3850a3a14b6354264f22"
	Dec 05 06:16:27 addons-640282 kubelet[1272]: I1205 06:16:27.275905    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4zwkd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:27 addons-640282 kubelet[1272]: I1205 06:16:27.275960    1272 scope.go:117] "RemoveContainer" containerID="fcca0f36b3fc3e7001c305277aa74fd0a637fc369c7772bca53042118da07c93"
	Dec 05 06:16:27 addons-640282 kubelet[1272]: E1205 06:16:27.276146    1272 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4zwkd_kube-system(fc52dd35-e9bf-4770-aa18-66aac8d15c08)\"" pod="kube-system/registry-creds-764b6fb674-4zwkd" podUID="fc52dd35-e9bf-4770-aa18-66aac8d15c08"
	Dec 05 06:16:31 addons-640282 kubelet[1272]: I1205 06:16:31.678238    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-ft52z" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:39 addons-640282 kubelet[1272]: I1205 06:16:39.677366    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-4zwkd" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:16:39 addons-640282 kubelet[1272]: I1205 06:16:39.677872    1272 scope.go:117] "RemoveContainer" containerID="fcca0f36b3fc3e7001c305277aa74fd0a637fc369c7772bca53042118da07c93"
	Dec 05 06:16:39 addons-640282 kubelet[1272]: E1205 06:16:39.678109    1272 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-4zwkd_kube-system(fc52dd35-e9bf-4770-aa18-66aac8d15c08)\"" pod="kube-system/registry-creds-764b6fb674-4zwkd" podUID="fc52dd35-e9bf-4770-aa18-66aac8d15c08"
	Dec 05 06:16:43 addons-640282 kubelet[1272]: I1205 06:16:43.888581    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6453e75e-95d1-4828-8d58-edafd81dfdc2-gcp-creds\") pod \"hello-world-app-5d498dc89-8hrdc\" (UID: \"6453e75e-95d1-4828-8d58-edafd81dfdc2\") " pod="default/hello-world-app-5d498dc89-8hrdc"
	Dec 05 06:16:43 addons-640282 kubelet[1272]: I1205 06:16:43.888654    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklj7\" (UniqueName: \"kubernetes.io/projected/6453e75e-95d1-4828-8d58-edafd81dfdc2-kube-api-access-wklj7\") pod \"hello-world-app-5d498dc89-8hrdc\" (UID: \"6453e75e-95d1-4828-8d58-edafd81dfdc2\") " pod="default/hello-world-app-5d498dc89-8hrdc"
	
	
	==> storage-provisioner [8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108] <==
	W1205 06:16:20.772727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:22.775483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:22.779840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:24.783466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:24.788018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:26.790839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:26.795304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:28.799108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:28.803678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:30.806572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:30.813587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:32.816699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:32.821321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:34.825093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:34.829868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:36.833349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:36.838784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:38.842189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:38.849086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:40.852740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:40.859429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:42.862540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:42.867019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:44.871550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:16:44.879988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
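
The storage-provisioner warnings at the tail of the captured logs above are advisory rather than fatal: the provisioner still watches v1 Endpoints for its leader election, and that API is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the suggested replacement, assuming a default kubeconfig pointing at the cluster under test and using a hypothetical service name my-svc:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the default kubeconfig selects the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlices for a Service carry the label
		// kubernetes.io/service-name=<name>; "my-svc" is a placeholder,
		// not a service from this report.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=my-svc"})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}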
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-640282 -n addons-640282
helpers_test.go:269: (dbg) Run:  kubectl --context addons-640282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-640282 describe pod ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-640282 describe pod ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx: exit status 1 (84.753323ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w7jhq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rl2nx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-640282 describe pod ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx: exit status 1
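The two NotFound errors are expected noise rather than a new failure: the ingress-nginx admission create and patch pods are one-shot Job pods that had already been garbage-collected by post-mortem time, so their names survive in the field-selector listing but the pods can no longer be described. A sketch of the same non-running-pod query done with client-go instead of kubectl, under the same kubeconfig assumption as the previous sketch:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace+"/"+p.Name, p.Status.Phase)
		}
	}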
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (295.998907ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:16:47.116811  454652 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:16:47.117941  454652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:47.117985  454652 out.go:374] Setting ErrFile to fd 2...
	I1205 06:16:47.118007  454652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:47.118321  454652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:16:47.118691  454652 mustload.go:66] Loading cluster: addons-640282
	I1205 06:16:47.119114  454652 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:16:47.119147  454652 addons.go:622] checking whether the cluster is paused
	I1205 06:16:47.119275  454652 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:16:47.119299  454652 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:16:47.119831  454652 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:16:47.143804  454652 ssh_runner.go:195] Run: systemctl --version
	I1205 06:16:47.143865  454652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:16:47.166793  454652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:16:47.278741  454652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:16:47.278835  454652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:16:47.312499  454652 cri.go:89] found id: "fcca0f36b3fc3e7001c305277aa74fd0a637fc369c7772bca53042118da07c93"
	I1205 06:16:47.312518  454652 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:16:47.312523  454652 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:16:47.312527  454652 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:16:47.312530  454652 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:16:47.312533  454652 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:16:47.312536  454652 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:16:47.312539  454652 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:16:47.312545  454652 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:16:47.312550  454652 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:16:47.312553  454652 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:16:47.312556  454652 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:16:47.312559  454652 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:16:47.312562  454652 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:16:47.312566  454652 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:16:47.312578  454652 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:16:47.312581  454652 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:16:47.312586  454652 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:16:47.312589  454652 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:16:47.312592  454652 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:16:47.312597  454652 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:16:47.312600  454652 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:16:47.312603  454652 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:16:47.312606  454652 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:16:47.312609  454652 cri.go:89] found id: ""
	I1205 06:16:47.312661  454652 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:16:47.332025  454652 out.go:203] 
	W1205 06:16:47.335535  454652 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:16:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:16:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:16:47.335565  454652 out.go:285] * 
	* 
	W1205 06:16:47.342363  454652 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:16:47.346545  454652 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
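Note that the exit status 11 here is not an ingress-dns problem: before disabling an addon, minikube checks whether the cluster is paused by listing containers through runc, and `sudo runc list -f json` aborts because runc's default state directory /run/runc does not exist on this crio node (plausibly the runtime invokes runc with a non-default --root; the log does not confirm why the directory is missing). A minimal sketch of that probe and the --root escape hatch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRunc mirrors the probe from the log: `sudo runc list -f json`.
	// With an empty root, runc falls back to its default state dir
	// (/run/runc when running as root), which is exactly what fails above.
	func listRunc(root string) ([]byte, error) {
		args := []string{"runc"}
		if root != "" {
			// Pointing --root at the runtime's actual state dir is an
			// assumption about the fix, not something this report verifies.
			args = append(args, "--root", root)
		}
		args = append(args, "list", "-f", "json")
		return exec.Command("sudo", args...).CombinedOutput()
	}

	func main() {
		out, err := listRunc("")
		fmt.Printf("out=%s err=%v\n", out, err)
	}

The identical error repeats below when the ingress and inspektor-gadget addons are disabled, so every addons disable in this run fails at the same pause check.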
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable ingress --alsologtostderr -v=1: exit status 11 (274.614453ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:16:47.403130  454772 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:16:47.403967  454772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:47.404005  454772 out.go:374] Setting ErrFile to fd 2...
	I1205 06:16:47.404026  454772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:16:47.404383  454772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:16:47.404717  454772 mustload.go:66] Loading cluster: addons-640282
	I1205 06:16:47.405151  454772 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:16:47.405190  454772 addons.go:622] checking whether the cluster is paused
	I1205 06:16:47.405339  454772 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:16:47.405370  454772 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:16:47.405931  454772 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:16:47.428555  454772 ssh_runner.go:195] Run: systemctl --version
	I1205 06:16:47.428610  454772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:16:47.448275  454772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:16:47.560831  454772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:16:47.560919  454772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:16:47.594217  454772 cri.go:89] found id: "fcca0f36b3fc3e7001c305277aa74fd0a637fc369c7772bca53042118da07c93"
	I1205 06:16:47.594237  454772 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:16:47.594241  454772 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:16:47.594245  454772 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:16:47.594248  454772 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:16:47.594264  454772 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:16:47.594268  454772 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:16:47.594275  454772 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:16:47.594278  454772 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:16:47.594284  454772 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:16:47.594288  454772 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:16:47.594290  454772 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:16:47.594293  454772 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:16:47.594296  454772 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:16:47.594299  454772 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:16:47.594304  454772 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:16:47.594307  454772 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:16:47.594310  454772 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:16:47.594313  454772 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:16:47.594316  454772 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:16:47.594320  454772 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:16:47.594323  454772 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:16:47.594326  454772 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:16:47.594329  454772 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:16:47.594332  454772 cri.go:89] found id: ""
	I1205 06:16:47.594412  454772 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:16:47.609783  454772 out.go:203] 
	W1205 06:16:47.613045  454772 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:16:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:16:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:16:47.613069  454772 out.go:285] * 
	* 
	W1205 06:16:47.619398  454772 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:16:47.622544  454772 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.83s)

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-62wxv" [442dce63-6465-4b63-97af-ca22f7fcdfa7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002994275s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (257.943043ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:23.594010  452313 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:23.594872  452313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:23.594896  452313 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:23.594902  452313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:23.595271  452313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:23.595672  452313 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:23.596163  452313 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:23.596187  452313 addons.go:622] checking whether the cluster is paused
	I1205 06:14:23.596342  452313 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:23.596362  452313 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:23.596979  452313 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:23.615742  452313 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:23.615813  452313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:23.633658  452313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:23.736882  452313 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:23.737018  452313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:23.765128  452313 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:23.765151  452313 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:23.765156  452313 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:23.765161  452313 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:23.765164  452313 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:23.765169  452313 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:23.765172  452313 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:23.765175  452313 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:23.765201  452313 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:23.765214  452313 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:23.765218  452313 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:23.765221  452313 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:23.765227  452313 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:23.765230  452313 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:23.765234  452313 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:23.765250  452313 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:23.765258  452313 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:23.765277  452313 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:23.765282  452313 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:23.765285  452313 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:23.765294  452313 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:23.765308  452313 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:23.765312  452313 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:23.765317  452313 cri.go:89] found id: ""
	I1205 06:14:23.765386  452313 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:23.780894  452313 out.go:203] 
	W1205 06:14:23.783782  452313 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:23.783813  452313 out.go:285] * 
	* 
	W1205 06:14:23.790252  452313 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:23.793134  452313 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.105093ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004553189s
addons_test.go:463: (dbg) Run:  kubectl --context addons-640282 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (260.110731ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:18.327465  452230 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:18.328249  452230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:18.328288  452230 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:18.328295  452230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:18.328622  452230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:18.328956  452230 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:18.329395  452230 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:18.329417  452230 addons.go:622] checking whether the cluster is paused
	I1205 06:14:18.329575  452230 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:18.329594  452230 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:18.330210  452230 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:18.347439  452230 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:18.347491  452230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:18.365110  452230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:18.472820  452230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:18.472981  452230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:18.504892  452230 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:18.504954  452230 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:18.504964  452230 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:18.504969  452230 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:18.504981  452230 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:18.504985  452230 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:18.504988  452230 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:18.504992  452230 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:18.504995  452230 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:18.505002  452230 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:18.505020  452230 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:18.505030  452230 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:18.505033  452230 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:18.505036  452230 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:18.505039  452230 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:18.505045  452230 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:18.505051  452230 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:18.505055  452230 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:18.505058  452230 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:18.505061  452230 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:18.505066  452230 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:18.505069  452230 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:18.505072  452230 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:18.505075  452230 cri.go:89] found id: ""
	I1205 06:14:18.505139  452230 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:18.520400  452230 out.go:203] 
	W1205 06:14:18.523268  452230 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:18.523290  452230 out.go:285] * 
	* 
	W1205 06:14:18.529614  452230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:18.532501  452230 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)

TestAddons/parallel/CSI (39.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1205 06:13:59.507337  444147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 06:13:59.511430  444147 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 06:13:59.511456  444147 kapi.go:107] duration metric: took 4.131382ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.141564ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-640282 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-640282 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4d802520-352d-44b7-844f-d82cf90401fb] Pending
helpers_test.go:352: "task-pv-pod" [4d802520-352d-44b7-844f-d82cf90401fb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [4d802520-352d-44b7-844f-d82cf90401fb] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003406924s
addons_test.go:572: (dbg) Run:  kubectl --context addons-640282 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-640282 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-640282 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-640282 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-640282 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-640282 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-640282 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [817653fa-bde0-47d6-970e-bd5570028bff] Pending
helpers_test.go:352: "task-pv-pod-restore" [817653fa-bde0-47d6-970e-bd5570028bff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [817653fa-bde0-47d6-970e-bd5570028bff] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004109176s
addons_test.go:614: (dbg) Run:  kubectl --context addons-640282 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-640282 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-640282 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (256.583291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:38.245298  452922 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:38.246100  452922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:38.246112  452922 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:38.246117  452922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:38.246422  452922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:38.246730  452922 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:38.247119  452922 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:38.247137  452922 addons.go:622] checking whether the cluster is paused
	I1205 06:14:38.247247  452922 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:38.247263  452922 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:38.247799  452922 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:38.266456  452922 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:38.266515  452922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:38.283688  452922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:38.392871  452922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:38.392977  452922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:38.423577  452922 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:38.423601  452922 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:38.423606  452922 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:38.423610  452922 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:38.423613  452922 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:38.423617  452922 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:38.423620  452922 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:38.423623  452922 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:38.423627  452922 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:38.423634  452922 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:38.423638  452922 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:38.423641  452922 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:38.423644  452922 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:38.423648  452922 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:38.423651  452922 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:38.423660  452922 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:38.423668  452922 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:38.423673  452922 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:38.423676  452922 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:38.423679  452922 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:38.423685  452922 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:38.423688  452922 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:38.423695  452922 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:38.423708  452922 cri.go:89] found id: ""
	I1205 06:14:38.423760  452922 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:38.438302  452922 out.go:203] 
	W1205 06:14:38.441221  452922 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:38.441245  452922 out.go:285] * 
	* 
	W1205 06:14:38.447838  452922 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:38.450627  452922 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (277.696118ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:38.508327  452966 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:38.509107  452966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:38.509123  452966 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:38.509130  452966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:38.509435  452966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:38.509767  452966 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:38.510259  452966 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:38.510279  452966 addons.go:622] checking whether the cluster is paused
	I1205 06:14:38.510495  452966 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:38.510516  452966 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:38.511050  452966 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:38.531557  452966 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:38.531614  452966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:38.549372  452966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:38.657558  452966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:38.657652  452966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:38.693894  452966 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:38.693916  452966 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:38.693921  452966 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:38.693926  452966 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:38.693929  452966 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:38.693933  452966 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:38.693936  452966 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:38.693940  452966 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:38.693943  452966 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:38.693959  452966 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:38.693963  452966 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:38.693966  452966 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:38.693969  452966 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:38.693972  452966 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:38.693976  452966 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:38.693984  452966 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:38.693993  452966 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:38.693998  452966 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:38.694001  452966 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:38.694004  452966 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:38.694009  452966 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:38.694012  452966 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:38.694015  452966 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:38.694018  452966 cri.go:89] found id: ""
	I1205 06:14:38.694076  452966 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:38.713727  452966 out.go:203] 
	W1205 06:14:38.717497  452966 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:38.717524  452966 out.go:285] * 
	* 
	W1205 06:14:38.724895  452966 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:38.728531  452966 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.23s)
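The CSI data path itself held up in this run (the PVC bound, both pods went Running within seconds, and the snapshot restore completed); only the trailing volumesnapshots and csi-hostpath-driver disables hit the same runc failure. The repeated helpers_test.go:402 lines above are a polling loop over `kubectl get pvc -o jsonpath={.status.phase}`; the sketch below reproduces that loop under the assumption that kubectl and the addons-640282 context are available. waitForPVC is a hypothetical helper, not part of the suite.

// Sketch of the PVC polling the helpers above perform: re-read
// .status.phase until it reports "Bound" or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForPVC(ctx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // the test helpers poll on a similar cadence
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitForPVC("addons-640282", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}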

TestAddons/parallel/Headlamp (3.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-640282 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-640282 --alsologtostderr -v=1: exit status 11 (272.128617ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:13:56.263019  451247 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:56.263878  451247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:56.263926  451247 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:56.263949  451247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:56.264239  451247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:13:56.264572  451247 mustload.go:66] Loading cluster: addons-640282
	I1205 06:13:56.265002  451247 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:56.265047  451247 addons.go:622] checking whether the cluster is paused
	I1205 06:13:56.265181  451247 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:56.265215  451247 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:13:56.265758  451247 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:13:56.288644  451247 ssh_runner.go:195] Run: systemctl --version
	I1205 06:13:56.288696  451247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:13:56.306679  451247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:13:56.414054  451247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:13:56.414154  451247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:13:56.445731  451247 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:13:56.445756  451247 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:13:56.445761  451247 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:13:56.445765  451247 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:13:56.445769  451247 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:13:56.445773  451247 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:13:56.445776  451247 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:13:56.445779  451247 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:13:56.445782  451247 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:13:56.445788  451247 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:13:56.445792  451247 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:13:56.445795  451247 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:13:56.445798  451247 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:13:56.445802  451247 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:13:56.445805  451247 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:13:56.445820  451247 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:13:56.445827  451247 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:13:56.445832  451247 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:13:56.445835  451247 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:13:56.445839  451247 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:13:56.445843  451247 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:13:56.445846  451247 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:13:56.445849  451247 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:13:56.445852  451247 cri.go:89] found id: ""
	I1205 06:13:56.445909  451247 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:13:56.461866  451247 out.go:203] 
	W1205 06:13:56.465161  451247 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:13:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:13:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:13:56.465193  451247 out.go:285] * 
	* 
	W1205 06:13:56.471859  451247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:13:56.474985  451247 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-640282 --alsologtostderr -v=1": exit status 11
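Each stderr dump above resolves the node's SSH endpoint the same way: the kicbase node is an ordinary Docker container, and minikube reads the host port mapped to its 22/tcp from NetworkSettings.Ports. The --format template in the sketch below is copied verbatim from the cli_runner lines above, and the post-mortem docker inspect that follows shows the resulting mapping (22/tcp -> 127.0.0.1:33133). The wrapper program is only an illustrative sketch.

// Look up the host port Docker published for the node container's SSH port,
// using the same inspect template that appears in the logs above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
		"addons-640282").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// The template wraps the value in single quotes; strip them.
	port := strings.Trim(strings.TrimSpace(string(out)), "'")
	fmt.Println("ssh host port:", port) // 33133 in this run
}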
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-640282
helpers_test.go:243: (dbg) docker inspect addons-640282:

-- stdout --
	[
	    {
	        "Id": "b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005",
	        "Created": "2025-12-05T06:11:22.483403755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:11:22.55725687Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/hostname",
	        "HostsPath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/hosts",
	        "LogPath": "/var/lib/docker/containers/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005-json.log",
	        "Name": "/addons-640282",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-640282:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-640282",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005",
	                "LowerDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5bf431f08a7411f5dcc1977988fc41688f649c0e6a6320168bf9944a9c1a95b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-640282",
	                "Source": "/var/lib/docker/volumes/addons-640282/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-640282",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-640282",
	                "name.minikube.sigs.k8s.io": "addons-640282",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b02610bda796e6596fde1e86088c0cf7e74f122abeb77fe4fbf4f775488f26d5",
	            "SandboxKey": "/var/run/docker/netns/b02610bda796",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-640282": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:e5:55:99:0c:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8171e975fc82aee656b3853dc5b0661bbcc39adb7da0925bfde854ed0e4cc72",
	                    "EndpointID": "8e2f4cebdc4acd5b3219427e24bebcd06ae5f1e40f797a05a7cd7a7d70451631",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-640282",
	                        "b467876b75d6"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
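The dump above is plain docker container inspect output for the node container: the --publish=127.0.0.1::8443-style flags let Docker pick free host ports, and the chosen values (33133-33137 here) only surface afterwards in NetworkSettings.Ports. A minimal Go sketch of recovering the SSH port the same way, assuming the container name addons-640282 and that the 22/tcp mapping exists:

    // Sketch: recover the host port mapped to 22/tcp from `docker container inspect`.
    // Only the fields used here are declared; the rest of the JSON is ignored.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "container", "inspect", "addons-640282").Output()
    	if err != nil {
    		panic(err)
    	}
    	var containers []inspect // inspect always returns a JSON array
    	if err := json.Unmarshal(out, &containers); err != nil {
    		panic(err)
    	}
    	ssh := containers[0].NetworkSettings.Ports["22/tcp"][0]
    	fmt.Printf("ssh reachable at %s:%s\n", ssh.HostIp, ssh.HostPort)
    }

Run against the container inspected above, this would print 127.0.0.1:33133.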
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-640282 -n addons-640282
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-640282 logs -n 25: (1.571082835s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-199160 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete │ -p download-only-199160 │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start │ -o=json --download-only -p download-only-820804 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-820804 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete │ -p download-only-820804 │ download-only-820804 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start │ -o=json --download-only -p download-only-120801 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-120801 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete │ -p download-only-120801 │ download-only-120801 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete │ -p download-only-199160 │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete │ -p download-only-820804 │ download-only-820804 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete │ -p download-only-120801 │ download-only-120801 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start │ --download-only -p download-docker-885973 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-885973 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ delete │ -p download-docker-885973 │ download-docker-885973 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start │ --download-only -p binary-mirror-589348 --alsologtostderr --binary-mirror http://127.0.0.1:36515 --driver=docker  --container-runtime=crio │ binary-mirror-589348 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ delete │ -p binary-mirror-589348 │ binary-mirror-589348 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ addons │ enable dashboard -p addons-640282 │ addons-640282 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ addons │ disable dashboard -p addons-640282 │ addons-640282 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │  │
	│ start │ -p addons-640282 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-640282 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:13 UTC │
	│ addons │ addons-640282 addons disable volcano --alsologtostderr -v=1 │ addons-640282 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │  │
	│ addons │ addons-640282 addons disable gcp-auth --alsologtostderr -v=1 │ addons-640282 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │  │
	│ addons │ enable headlamp -p addons-640282 --alsologtostderr -v=1 │ addons-640282 │ jenkins │ v1.37.0 │ 05 Dec 25 06:13 UTC │  │
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:11:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:11:16.369775  445166 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:11:16.369970  445166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:16.370002  445166 out.go:374] Setting ErrFile to fd 2...
	I1205 06:11:16.370022  445166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:16.370305  445166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:11:16.370866  445166 out.go:368] Setting JSON to false
	I1205 06:11:16.371701  445166 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10404,"bootTime":1764904673,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:11:16.371802  445166 start.go:143] virtualization:  
	I1205 06:11:16.375212  445166 out.go:179] * [addons-640282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:11:16.378304  445166 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:11:16.378404  445166 notify.go:221] Checking for updates...
	I1205 06:11:16.383905  445166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:11:16.386751  445166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:16.389521  445166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:11:16.392589  445166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:11:16.395434  445166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:11:16.398522  445166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:11:16.435359  445166 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:11:16.435502  445166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:16.496570  445166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:11:16.487540451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:16.496685  445166 docker.go:319] overlay module found
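	Both info dumps above come from a single docker system info --format "{{json .}}" call, decoded straight into a struct; only the fields being checked need to be declared. A rough sketch under that assumption:

    // Sketch: decode a few fields from `docker system info --format "{{json .}}"`.
    // Field names match Docker's JSON output, as visible in the log line above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info struct {
    		Driver        string
    		MemTotal      int64
    		NCPU          int
    		ServerVersion string
    	}
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("docker %s, %s, %d CPUs, %d bytes RAM\n",
    		info.ServerVersion, info.Driver, info.NCPU, info.MemTotal)
    }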
	I1205 06:11:16.499680  445166 out.go:179] * Using the docker driver based on user configuration
	I1205 06:11:16.502534  445166 start.go:309] selected driver: docker
	I1205 06:11:16.502557  445166 start.go:927] validating driver "docker" against <nil>
	I1205 06:11:16.502572  445166 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:11:16.503326  445166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:16.555478  445166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:11:16.546844332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:16.555639  445166 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:11:16.555862  445166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:11:16.558667  445166 out.go:179] * Using Docker driver with root privileges
	I1205 06:11:16.561371  445166 cni.go:84] Creating CNI manager for ""
	I1205 06:11:16.561433  445166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:16.561445  445166 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:11:16.561515  445166 start.go:353] cluster config:
	{Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:11:16.564587  445166 out.go:179] * Starting "addons-640282" primary control-plane node in "addons-640282" cluster
	I1205 06:11:16.567379  445166 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:11:16.570183  445166 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:11:16.572963  445166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:11:16.573034  445166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1205 06:11:16.573049  445166 cache.go:65] Caching tarball of preloaded images
	I1205 06:11:16.573053  445166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:11:16.573134  445166 preload.go:238] Found /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 06:11:16.573145  445166 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:11:16.573485  445166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/config.json ...
	I1205 06:11:16.573506  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/config.json: {Name:mkfabe31521d55786406320c487f12d681aef468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:16.591806  445166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:11:16.591832  445166 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 06:11:16.591852  445166 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:11:16.591882  445166 start.go:360] acquireMachinesLock for addons-640282: {Name:mk3b6d44b6b925e5bf07bbdf6658ad19c10866d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:11:16.591991  445166 start.go:364] duration metric: took 88.69µs to acquireMachinesLock for "addons-640282"
	I1205 06:11:16.592021  445166 start.go:93] Provisioning new machine with config: &{Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:11:16.592090  445166 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:11:16.595509  445166 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1205 06:11:16.595758  445166 start.go:159] libmachine.API.Create for "addons-640282" (driver="docker")
	I1205 06:11:16.595799  445166 client.go:173] LocalClient.Create starting
	I1205 06:11:16.595908  445166 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem
	I1205 06:11:17.176454  445166 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem
	I1205 06:11:17.399092  445166 cli_runner.go:164] Run: docker network inspect addons-640282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:11:17.416138  445166 cli_runner.go:211] docker network inspect addons-640282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:11:17.416228  445166 network_create.go:284] running [docker network inspect addons-640282] to gather additional debugging logs...
	I1205 06:11:17.416252  445166 cli_runner.go:164] Run: docker network inspect addons-640282
	W1205 06:11:17.434564  445166 cli_runner.go:211] docker network inspect addons-640282 returned with exit code 1
	I1205 06:11:17.434596  445166 network_create.go:287] error running [docker network inspect addons-640282]: docker network inspect addons-640282: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-640282 not found
	I1205 06:11:17.434609  445166 network_create.go:289] output of [docker network inspect addons-640282]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-640282 not found
	
	** /stderr **
	I1205 06:11:17.434701  445166 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:11:17.451053  445166 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ae9030}
	I1205 06:11:17.451096  445166 network_create.go:124] attempt to create docker network addons-640282 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:11:17.451156  445166 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-640282 addons-640282
	I1205 06:11:17.518259  445166 network_create.go:108] docker network addons-640282 192.168.49.0/24 created
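	The network create call above spells out everything Docker would otherwise choose: driver, subnet, gateway, MTU, and the minikube ownership labels that later let the harness find and delete the network. The same invocation from Go, with values copied from the log (adjust the subnet if 192.168.49.0/24 is already taken on your host):

    // Sketch: create the cluster network the way the cli_runner call above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"network", "create",
    		"--driver=bridge",
    		"--subnet=192.168.49.0/24",
    		"--gateway=192.168.49.1",
    		"-o", "--ip-masq",
    		"-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=addons-640282",
    		"addons-640282",
    	}
    	out, err := exec.Command("docker", args...).CombinedOutput()
    	fmt.Printf("%s", out) // prints the new network ID on success
    	if err != nil {
    		panic(err)
    	}
    }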
	I1205 06:11:17.518308  445166 kic.go:121] calculated static IP "192.168.49.2" for the "addons-640282" container
	I1205 06:11:17.518468  445166 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:11:17.537482  445166 cli_runner.go:164] Run: docker volume create addons-640282 --label name.minikube.sigs.k8s.io=addons-640282 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:11:17.555857  445166 oci.go:103] Successfully created a docker volume addons-640282
	I1205 06:11:17.555968  445166 cli_runner.go:164] Run: docker run --rm --name addons-640282-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-640282 --entrypoint /usr/bin/test -v addons-640282:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:11:18.449149  445166 oci.go:107] Successfully prepared a docker volume addons-640282
	I1205 06:11:18.449221  445166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:11:18.449231  445166 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 06:11:18.449298  445166 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-640282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 06:11:22.417567  445166 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-640282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.968231094s)
	I1205 06:11:22.417604  445166 kic.go:203] duration metric: took 3.968369663s to extract preloaded images to volume ...
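	The preload trick is worth noting: rather than starting the node and copying images in, a throwaway container mounts the tarball read-only and the named volume at /extractDir, and tar does the unpacking with lz4 decompression. A sketch of the same call; the tarball path is a placeholder, and the image ref is the log's minus the digest pin:

    // Sketch: extract a preload tarball into a docker volume via a throwaway
    // container, mirroring the cli_runner call above.
    package main

    import "os/exec"

    func main() {
    	tarball := "/path/to/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4" // placeholder
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974"
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "addons-640282:/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }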
	W1205 06:11:22.417756  445166 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 06:11:22.417878  445166 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:11:22.467199  445166 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-640282 --name addons-640282 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-640282 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-640282 --network addons-640282 --ip 192.168.49.2 --volume addons-640282:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 06:11:22.788731  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Running}}
	I1205 06:11:22.813080  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:22.841190  445166 cli_runner.go:164] Run: docker exec addons-640282 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:11:22.892656  445166 oci.go:144] the created container "addons-640282" has a running status.
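	The node container itself runs privileged with seccomp and apparmor unconfined, /run and /tmp on tmpfs, the preloaded volume at /var, a fixed IP on the cluster network, and each service port published to an ephemeral 127.0.0.1 port (the empty host side of --publish=127.0.0.1::8443 asks Docker to pick one; the picks land in the NetworkSettings.Ports map shown earlier). A trimmed restatement of the run flags above, with the labels and the 2376/5000/32443 publishes elided for brevity:

    // Sketch: the essential docker run flags for the kic node container,
    // taken from the cli_runner line above.
    package main

    import "os/exec"

    func main() {
    	args := []string{
    		"run", "-d", "-t", "--privileged",
    		"--security-opt", "seccomp=unconfined",
    		"--security-opt", "apparmor=unconfined",
    		"--tmpfs", "/tmp", "--tmpfs", "/run",
    		"-v", "/lib/modules:/lib/modules:ro",
    		"--hostname", "addons-640282", "--name", "addons-640282",
    		"--network", "addons-640282", "--ip", "192.168.49.2",
    		"--volume", "addons-640282:/var",
    		"--memory=4096mb", "--cpus=2",
    		"-e", "container=docker",
    		"--expose", "8443",
    		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974",
    	}
    	if err := exec.Command("docker", args...).Run(); err != nil {
    		panic(err)
    	}
    }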
	I1205 06:11:22.892685  445166 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa...
	I1205 06:11:23.327802  445166 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:11:23.350264  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:23.366943  445166 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:11:23.366965  445166 kic_runner.go:114] Args: [docker exec --privileged addons-640282 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:11:23.407362  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:23.426449  445166 machine.go:94] provisionDockerMachine start ...
	I1205 06:11:23.426552  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:23.443279  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:23.443617  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:23.443632  445166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:11:23.444269  445166 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45202->127.0.0.1:33133: read: connection reset by peer
	I1205 06:11:26.593993  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640282
	
	I1205 06:11:26.594020  445166 ubuntu.go:182] provisioning hostname "addons-640282"
	I1205 06:11:26.594107  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:26.611695  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:26.612027  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:26.612043  445166 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-640282 && echo "addons-640282" | sudo tee /etc/hostname
	I1205 06:11:26.771676  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640282
	
	I1205 06:11:26.771789  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:26.788484  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:26.788805  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:26.788831  445166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-640282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-640282/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-640282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:11:26.938707  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:11:26.938734  445166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:11:26.938760  445166 ubuntu.go:190] setting up certificates
	I1205 06:11:26.938770  445166 provision.go:84] configureAuth start
	I1205 06:11:26.938836  445166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-640282
	I1205 06:11:26.955736  445166 provision.go:143] copyHostCerts
	I1205 06:11:26.955825  445166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:11:26.955968  445166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:11:26.956044  445166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:11:26.956107  445166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.addons-640282 san=[127.0.0.1 192.168.49.2 addons-640282 localhost minikube]
	I1205 06:11:27.335721  445166 provision.go:177] copyRemoteCerts
	I1205 06:11:27.335802  445166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:11:27.335853  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:27.353058  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:27.458434  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:11:27.475924  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 06:11:27.493804  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 06:11:27.512020  445166 provision.go:87] duration metric: took 573.233315ms to configureAuth
	I1205 06:11:27.512051  445166 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:11:27.512242  445166 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:11:27.512356  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:27.530211  445166 main.go:143] libmachine: Using SSH client type: native
	I1205 06:11:27.530556  445166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1205 06:11:27.530577  445166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:11:28.037414  445166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:11:28.037439  445166 machine.go:97] duration metric: took 4.610959338s to provisionDockerMachine
	I1205 06:11:28.037450  445166 client.go:176] duration metric: took 11.441641191s to LocalClient.Create
	I1205 06:11:28.037463  445166 start.go:167] duration metric: took 11.44170699s to libmachine.API.Create "addons-640282"
	I1205 06:11:28.037470  445166 start.go:293] postStartSetup for "addons-640282" (driver="docker")
	I1205 06:11:28.037481  445166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:11:28.037553  445166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:11:28.037604  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.055522  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.162242  445166 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:11:28.165325  445166 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:11:28.165358  445166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:11:28.165370  445166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:11:28.165449  445166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:11:28.165477  445166 start.go:296] duration metric: took 128.001024ms for postStartSetup
	I1205 06:11:28.165787  445166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-640282
	I1205 06:11:28.182748  445166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/config.json ...
	I1205 06:11:28.183037  445166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:11:28.183096  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.198893  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.299231  445166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:11:28.303779  445166 start.go:128] duration metric: took 11.711673025s to createHost
	I1205 06:11:28.303803  445166 start.go:83] releasing machines lock for "addons-640282", held for 11.711798393s
	I1205 06:11:28.303875  445166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-640282
	I1205 06:11:28.320568  445166 ssh_runner.go:195] Run: cat /version.json
	I1205 06:11:28.320622  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.320630  445166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:11:28.320722  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:28.339107  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.339005  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:28.437934  445166 ssh_runner.go:195] Run: systemctl --version
	I1205 06:11:28.527956  445166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:11:28.569451  445166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:11:28.575021  445166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:11:28.575098  445166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:11:28.603359  445166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
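	Because kindnet will provide the pod network, any stock bridge/podman CNI definitions are renamed to *.mk_disabled so CRI-O ignores them; the find/-exec mv call above does this in one pass. Roughly the same pass in Go:

    // Sketch: rename bridge/podman CNI configs to *.mk_disabled, like the
    // find/-exec mv call above. Scans /etc/cni/net.d non-recursively.
    package main

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	entries, err := os.ReadDir("/etc/cni/net.d")
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join("/etc/cni/net.d", name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				panic(err)
    			}
    		}
    	}
    }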
	I1205 06:11:28.603385  445166 start.go:496] detecting cgroup driver to use...
	I1205 06:11:28.603418  445166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:11:28.603468  445166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:11:28.621615  445166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:11:28.634577  445166 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:11:28.634649  445166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:11:28.652661  445166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:11:28.671405  445166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:11:28.789192  445166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:11:28.908748  445166 docker.go:234] disabling docker service ...
	I1205 06:11:28.908866  445166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:11:28.930094  445166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:11:28.943461  445166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:11:29.057995  445166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:11:29.181548  445166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:11:29.193544  445166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:11:29.207788  445166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:11:29.207855  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.216651  445166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:11:29.216729  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.226004  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.235272  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.244180  445166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:11:29.252443  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.261353  445166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:11:29.274828  445166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
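Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with a fragment roughly like the following (reconstructed from the commands; surrounding keys in the file may differ):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Setting the unprivileged port start to 0 lets containers bind ports below 1024 without extra capabilities, which ingress-style addons rely on.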
	I1205 06:11:29.283570  445166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:11:29.290819  445166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
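The echo above enables IPv4 forwarding, which pod-to-pod routing requires. The same knob can be flipped from Go by writing to procfs (requires root; illustrative only):

    package main

    import "os"

    func main() {
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            panic(err)
        }
    }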
	I1205 06:11:29.298171  445166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:11:29.411464  445166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:11:29.581960  445166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:11:29.582102  445166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:11:29.586505  445166 start.go:564] Will wait 60s for crictl version
	I1205 06:11:29.586618  445166 ssh_runner.go:195] Run: which crictl
	I1205 06:11:29.589997  445166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:11:29.627308  445166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:11:29.627480  445166 ssh_runner.go:195] Run: crio --version
	I1205 06:11:29.659181  445166 ssh_runner.go:195] Run: crio --version
	I1205 06:11:29.692016  445166 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 06:11:29.695080  445166 cli_runner.go:164] Run: docker network inspect addons-640282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:11:29.711113  445166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:11:29.714784  445166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
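The bash one-liner above is an idempotent upsert on /etc/hosts: drop any stale line for host.minikube.internal, append the fresh gateway mapping, and copy the result back into place. A Go sketch of the same pattern, with a hypothetical helper name (not minikube code):

    package main

    import (
        "os"
        "strings"
    )

    // upsertHosts removes any existing entry ending in "\t<name>" and
    // appends a fresh "<ip>\t<name>" line, mirroring the grep -v / echo / cp pipeline.
    func upsertHosts(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHosts("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }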
	I1205 06:11:29.724841  445166 kubeadm.go:884] updating cluster {Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:11:29.724953  445166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:11:29.725013  445166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:11:29.763299  445166 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:11:29.763325  445166 crio.go:433] Images already preloaded, skipping extraction
	I1205 06:11:29.763422  445166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:11:29.788193  445166 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:11:29.788214  445166 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:11:29.788223  445166 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1205 06:11:29.788310  445166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-640282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:11:29.788395  445166 ssh_runner.go:195] Run: crio config
	I1205 06:11:29.859554  445166 cni.go:84] Creating CNI manager for ""
	I1205 06:11:29.859582  445166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:29.859603  445166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:11:29.859664  445166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-640282 NodeName:addons-640282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:11:29.859805  445166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-640282"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:11:29.859890  445166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:11:29.867780  445166 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:11:29.867854  445166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:11:29.876061  445166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 06:11:29.889911  445166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:11:29.903308  445166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
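The kubeadm.yaml written above (printed in full earlier in this log) stacks four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that splits such a multi-document file and lists each document's kind, assuming gopkg.in/yaml.v3 is available:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f) // handles "---"-separated documents
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }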
	I1205 06:11:29.916745  445166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:11:29.920487  445166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:11:29.930362  445166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:11:30.084706  445166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:11:30.103961  445166 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282 for IP: 192.168.49.2
	I1205 06:11:30.104035  445166 certs.go:195] generating shared ca certs ...
	I1205 06:11:30.104067  445166 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.104275  445166 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:11:30.258433  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt ...
	I1205 06:11:30.258469  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt: {Name:mkc2f548ae0c6064e6a11ca99f32f9d80c761c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.258672  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key ...
	I1205 06:11:30.258684  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key: {Name:mk8b83813271d8b8513855033f159e0bd161be36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.258771  445166 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:11:30.363081  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt ...
	I1205 06:11:30.363107  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt: {Name:mk659828d6931ed8fef790edec4dd3c58c1614a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.363264  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key ...
	I1205 06:11:30.363277  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key: {Name:mk9ffcb7450ee6a545b8f33867c7db50ce955e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.363357  445166 certs.go:257] generating profile certs ...
	I1205 06:11:30.363413  445166 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.key
	I1205 06:11:30.363428  445166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt with IP's: []
	I1205 06:11:30.799471  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt ...
	I1205 06:11:30.799501  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: {Name:mk903c818a43bab9c5ec892bf6c703541a485eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.799681  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.key ...
	I1205 06:11:30.799695  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.key: {Name:mk79279a33252ca16bf2f611f598296c5131cab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.799768  445166 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b
	I1205 06:11:30.799791  445166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:11:30.882320  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b ...
	I1205 06:11:30.882350  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b: {Name:mkab35c912beeffc9cb6b43be7cd3582198691c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.882554  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b ...
	I1205 06:11:30.882569  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b: {Name:mk1b086178b0ae040c441f1e9e1a10caad913a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.882658  445166 certs.go:382] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt.2ed4139b -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt
	I1205 06:11:30.882743  445166 certs.go:386] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key.2ed4139b -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key
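The apiserver certificate generated above carries four IP SANs: the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.49.2), so clients can verify the server under any of those addresses. A self-contained Go sketch of minting a CA-signed cert with the same SANs via crypto/x509 (illustrative, not minikube's code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Self-signed CA stand-in for minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Serving cert with the IP SANs from the log line above.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }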
	I1205 06:11:30.882801  445166 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key
	I1205 06:11:30.882828  445166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt with IP's: []
	I1205 06:11:30.944941  445166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt ...
	I1205 06:11:30.944972  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt: {Name:mk6e6f618554910879efe0dd57b84948ea2e4685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.945161  445166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key ...
	I1205 06:11:30.945181  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key: {Name:mk9d46bb0f45e8838f8bb4c5d7f69899789a9817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:30.945369  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:11:30.945417  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:11:30.945448  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:11:30.945483  445166 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:11:30.946054  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:11:30.965590  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:11:30.985407  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:11:31.004188  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:11:31.024392  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 06:11:31.041926  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:11:31.060710  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:11:31.079262  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:11:31.098776  445166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:11:31.118174  445166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:11:31.131663  445166 ssh_runner.go:195] Run: openssl version
	I1205 06:11:31.138635  445166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.146510  445166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:11:31.154529  445166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.158818  445166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.158931  445166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:11:31.200028  445166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:11:31.207501  445166 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
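The openssl x509 -hash call above prints the certificate's subject-name hash (b5213941 here), and OpenSSL resolves trust anchors through <hash>.0 symlinks in /etc/ssl/certs, so the two commands install minikubeCA into the system trust store. A Go sketch of the same hash-and-link step (shells out to openssl; requires root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/etc/ssl/certs/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // mirror `ln -fs`: replace a stale link if present
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }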
	I1205 06:11:31.214800  445166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:11:31.218361  445166 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:11:31.218450  445166 kubeadm.go:401] StartCluster: {Name:addons-640282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-640282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:11:31.218582  445166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:11:31.218656  445166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:11:31.245116  445166 cri.go:89] found id: ""
	I1205 06:11:31.245188  445166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:11:31.252673  445166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:11:31.260250  445166 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:11:31.260334  445166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:11:31.267783  445166 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:11:31.267849  445166 kubeadm.go:158] found existing configuration files:
	
	I1205 06:11:31.267913  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 06:11:31.275983  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:11:31.276079  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:11:31.283224  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 06:11:31.290980  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:11:31.291045  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:11:31.298146  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 06:11:31.306021  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:11:31.306136  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:11:31.313393  445166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 06:11:31.320898  445166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:11:31.320985  445166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:11:31.328464  445166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:11:31.364887  445166 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 06:11:31.365195  445166 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:11:31.390108  445166 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:11:31.390184  445166 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:11:31.390223  445166 kubeadm.go:319] OS: Linux
	I1205 06:11:31.390272  445166 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:11:31.390335  445166 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:11:31.390423  445166 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:11:31.390477  445166 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:11:31.390527  445166 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:11:31.390578  445166 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:11:31.390632  445166 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:11:31.390684  445166 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:11:31.390734  445166 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:11:31.475451  445166 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:11:31.475568  445166 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:11:31.475665  445166 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:11:31.495474  445166 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:11:31.498477  445166 out.go:252]   - Generating certificates and keys ...
	I1205 06:11:31.498572  445166 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:11:31.498642  445166 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:11:31.975093  445166 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:11:32.107640  445166 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:11:32.221442  445166 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:11:32.808177  445166 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:11:33.471655  445166 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:11:33.471800  445166 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-640282 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:11:33.644154  445166 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:11:33.644311  445166 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-640282 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:11:34.315678  445166 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:11:34.897200  445166 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:11:35.464278  445166 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:11:35.464593  445166 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:11:36.212861  445166 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:11:36.456854  445166 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:11:36.925584  445166 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:11:37.862726  445166 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:11:38.683842  445166 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:11:38.684689  445166 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:11:38.687424  445166 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:11:38.690865  445166 out.go:252]   - Booting up control plane ...
	I1205 06:11:38.690976  445166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:11:38.691053  445166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:11:38.691428  445166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:11:38.708841  445166 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:11:38.709128  445166 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:11:38.716847  445166 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:11:38.717255  445166 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:11:38.717492  445166 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:11:38.850898  445166 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:11:38.851019  445166 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:11:39.853911  445166 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00178822s
	I1205 06:11:39.854571  445166 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 06:11:39.854672  445166 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1205 06:11:39.854765  445166 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 06:11:39.854847  445166 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 06:11:44.514306  445166 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.658727217s
	I1205 06:11:45.793971  445166 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.939381352s
	I1205 06:11:46.855985  445166 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001369358s
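The three control-plane checks above poll each component's local health endpoint until it answers 200. A minimal Go version of one such probe, using the kube-controller-manager port from the log (TLS verification is skipped because these localhost endpoints use self-signed certificates; the retry cap is an assumption matching kubeadm's stated 4m0s limit):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        const url = "https://127.0.0.1:10257/healthz"
        for i := 0; i < 240; i++ {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kube-controller-manager is healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for healthz")
    }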
	I1205 06:11:46.888437  445166 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 06:11:46.903478  445166 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 06:11:46.920307  445166 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 06:11:46.920759  445166 kubeadm.go:319] [mark-control-plane] Marking the node addons-640282 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 06:11:46.934981  445166 kubeadm.go:319] [bootstrap-token] Using token: 3i7si6.iz5o6rwdyxx3bnn8
	I1205 06:11:46.937907  445166 out.go:252]   - Configuring RBAC rules ...
	I1205 06:11:46.938036  445166 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 06:11:46.944044  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 06:11:46.951833  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 06:11:46.958061  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 06:11:46.961892  445166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 06:11:46.965763  445166 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 06:11:47.265073  445166 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 06:11:47.687824  445166 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1205 06:11:48.262566  445166 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1205 06:11:48.263806  445166 kubeadm.go:319] 
	I1205 06:11:48.263891  445166 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1205 06:11:48.263905  445166 kubeadm.go:319] 
	I1205 06:11:48.263984  445166 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1205 06:11:48.263993  445166 kubeadm.go:319] 
	I1205 06:11:48.264018  445166 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1205 06:11:48.264079  445166 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 06:11:48.264133  445166 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 06:11:48.264143  445166 kubeadm.go:319] 
	I1205 06:11:48.264198  445166 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1205 06:11:48.264206  445166 kubeadm.go:319] 
	I1205 06:11:48.264254  445166 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 06:11:48.264263  445166 kubeadm.go:319] 
	I1205 06:11:48.264314  445166 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1205 06:11:48.264393  445166 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 06:11:48.264465  445166 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 06:11:48.264472  445166 kubeadm.go:319] 
	I1205 06:11:48.264556  445166 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 06:11:48.264637  445166 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1205 06:11:48.264645  445166 kubeadm.go:319] 
	I1205 06:11:48.264729  445166 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3i7si6.iz5o6rwdyxx3bnn8 \
	I1205 06:11:48.264836  445166 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d5281293bcbe7e3015ce386b372a929210d99fe4b4fbe4b31e7ad560f07d8f20 \
	I1205 06:11:48.264859  445166 kubeadm.go:319] 	--control-plane 
	I1205 06:11:48.264867  445166 kubeadm.go:319] 
	I1205 06:11:48.264952  445166 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1205 06:11:48.264961  445166 kubeadm.go:319] 
	I1205 06:11:48.265044  445166 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3i7si6.iz5o6rwdyxx3bnn8 \
	I1205 06:11:48.265150  445166 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d5281293bcbe7e3015ce386b372a929210d99fe4b4fbe4b31e7ad560f07d8f20 
	I1205 06:11:48.267978  445166 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1205 06:11:48.268205  445166 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:11:48.268312  445166 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:11:48.268333  445166 cni.go:84] Creating CNI manager for ""
	I1205 06:11:48.268341  445166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:48.271492  445166 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 06:11:48.274459  445166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 06:11:48.278706  445166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 06:11:48.278727  445166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 06:11:48.292031  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 06:11:48.578604  445166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:11:48.578678  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:48.578795  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-640282 minikube.k8s.io/updated_at=2025_12_05T06_11_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45 minikube.k8s.io/name=addons-640282 minikube.k8s.io/primary=true
	I1205 06:11:48.694263  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:48.767919  445166 ops.go:34] apiserver oom_adj: -16
	I1205 06:11:49.194466  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:49.694347  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:50.194399  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:50.695008  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:51.195236  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:51.694529  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:52.194848  445166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 06:11:52.295493  445166 kubeadm.go:1114] duration metric: took 3.716870526s to wait for elevateKubeSystemPrivileges
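The burst of identical `kubectl get sa default` calls above (one roughly every 500ms) is a readiness poll: the cluster-admin binding step is only considered settled once the default service account exists in the fresh cluster. A Go sketch of the same loop (binary and kubeconfig paths taken from the log; the interval and timeout are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const kubectl = "/var/lib/minikube/binaries/v1.34.2/kubectl"
        deadline := time.Now().Add(3 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil { // exit 0 once the service account exists
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }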
	I1205 06:11:52.295521  445166 kubeadm.go:403] duration metric: took 21.077074658s to StartCluster
	I1205 06:11:52.295539  445166 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:52.295658  445166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:52.296114  445166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:52.296307  445166 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:11:52.296444  445166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 06:11:52.296700  445166 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:11:52.296739  445166 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 06:11:52.296811  445166 addons.go:70] Setting yakd=true in profile "addons-640282"
	I1205 06:11:52.296828  445166 addons.go:239] Setting addon yakd=true in "addons-640282"
	I1205 06:11:52.296849  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.297365  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.297615  445166 addons.go:70] Setting inspektor-gadget=true in profile "addons-640282"
	I1205 06:11:52.297634  445166 addons.go:239] Setting addon inspektor-gadget=true in "addons-640282"
	I1205 06:11:52.297658  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.298090  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.298316  445166 addons.go:70] Setting metrics-server=true in profile "addons-640282"
	I1205 06:11:52.298339  445166 addons.go:239] Setting addon metrics-server=true in "addons-640282"
	I1205 06:11:52.298368  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.298841  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.299142  445166 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-640282"
	I1205 06:11:52.299170  445166 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-640282"
	I1205 06:11:52.299193  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.299594  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.306539  445166 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-640282"
	I1205 06:11:52.306575  445166 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-640282"
	I1205 06:11:52.306617  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.307089  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.307936  445166 addons.go:70] Setting cloud-spanner=true in profile "addons-640282"
	I1205 06:11:52.307964  445166 addons.go:239] Setting addon cloud-spanner=true in "addons-640282"
	I1205 06:11:52.307996  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.308423  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.308883  445166 addons.go:70] Setting registry=true in profile "addons-640282"
	I1205 06:11:52.308905  445166 addons.go:239] Setting addon registry=true in "addons-640282"
	I1205 06:11:52.308930  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.309354  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.316350  445166 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-640282"
	I1205 06:11:52.316426  445166 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-640282"
	I1205 06:11:52.316457  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.316915  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.326520  445166 addons.go:70] Setting registry-creds=true in profile "addons-640282"
	I1205 06:11:52.326552  445166 addons.go:239] Setting addon registry-creds=true in "addons-640282"
	I1205 06:11:52.326588  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.327056  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.332502  445166 addons.go:70] Setting default-storageclass=true in profile "addons-640282"
	I1205 06:11:52.332549  445166 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-640282"
	I1205 06:11:52.332918  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.348125  445166 addons.go:70] Setting gcp-auth=true in profile "addons-640282"
	I1205 06:11:52.348162  445166 mustload.go:66] Loading cluster: addons-640282
	I1205 06:11:52.348377  445166 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:11:52.351375  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.354596  445166 addons.go:70] Setting storage-provisioner=true in profile "addons-640282"
	I1205 06:11:52.354626  445166 addons.go:239] Setting addon storage-provisioner=true in "addons-640282"
	I1205 06:11:52.354660  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.355143  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.397956  445166 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-640282"
	I1205 06:11:52.397991  445166 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-640282"
	I1205 06:11:52.398344  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.408590  445166 addons.go:70] Setting ingress=true in profile "addons-640282"
	I1205 06:11:52.408628  445166 addons.go:239] Setting addon ingress=true in "addons-640282"
	I1205 06:11:52.408679  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.409178  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.424885  445166 addons.go:70] Setting ingress-dns=true in profile "addons-640282"
	I1205 06:11:52.424923  445166 addons.go:239] Setting addon ingress-dns=true in "addons-640282"
	I1205 06:11:52.424964  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.425005  445166 addons.go:70] Setting volcano=true in profile "addons-640282"
	I1205 06:11:52.425036  445166 addons.go:239] Setting addon volcano=true in "addons-640282"
	I1205 06:11:52.425063  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.425466  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.425474  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.435433  445166 out.go:179] * Verifying Kubernetes components...
	I1205 06:11:52.438536  445166 addons.go:70] Setting volumesnapshots=true in profile "addons-640282"
	I1205 06:11:52.438574  445166 addons.go:239] Setting addon volumesnapshots=true in "addons-640282"
	I1205 06:11:52.438622  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.439079  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.465658  445166 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 06:11:52.468533  445166 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1205 06:11:52.494020  445166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:11:52.494304  445166 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 06:11:52.501806  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 06:11:52.501882  445166 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 06:11:52.501982  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.513039  445166 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-640282"
	I1205 06:11:52.513087  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.513545  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.518269  445166 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1205 06:11:52.519172  445166 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:11:52.519221  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 06:11:52.519335  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.531204  445166 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1205 06:11:52.531494  445166 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:11:52.531542  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 06:11:52.531644  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.537912  445166 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1205 06:11:52.541624  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 06:11:52.541644  445166 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 06:11:52.541706  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.565182  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 06:11:52.566648  445166 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1205 06:11:52.566673  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 06:11:52.566738  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.569829  445166 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1205 06:11:52.573081  445166 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:11:52.573156  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1205 06:11:52.573306  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.606669  445166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
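The pipeline above rewrites the CoreDNS Corefile in place: it fetches the coredns ConfigMap, uses sed to splice a hosts block (mapping host.minikube.internal to the host gateway) ahead of the forward directive and a log directive ahead of errors, then replaces the ConfigMap. The edited Corefile should contain a fragment roughly like this (reconstructed from the sed script; other directives omitted):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough line ensures names not covered by the hosts block still reach the upstream resolver.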
	I1205 06:11:52.606818  445166 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:11:52.606829  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1205 06:11:52.606884  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.612426  445166 addons.go:239] Setting addon default-storageclass=true in "addons-640282"
	I1205 06:11:52.612467  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.612870  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:11:52.617891  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:11:52.621548  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	W1205 06:11:52.621949  445166 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 06:11:52.622862  445166 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:11:52.623923  445166 out.go:179]   - Using image docker.io/registry:3.0.0
	I1205 06:11:52.641050  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 06:11:52.647387  445166 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:11:52.647411  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:11:52.647478  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.651737  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.654892  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:11:52.655563  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 06:11:52.655730  445166 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1205 06:11:52.661744  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1205 06:11:52.663489  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 06:11:52.663514  445166 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 06:11:52.663598  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.679162  445166 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1205 06:11:52.682547  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 06:11:52.682932  445166 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:11:52.682945  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 06:11:52.683044  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.689672  445166 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 06:11:52.689739  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 06:11:52.689834  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.709004  445166 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:11:52.709029  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1205 06:11:52.709099  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.714680  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.715161  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.715704  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 06:11:52.722532  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 06:11:52.726047  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 06:11:52.732974  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 06:11:52.736435  445166 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 06:11:52.742734  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 06:11:52.742768  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 06:11:52.742848  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.771644  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.806923  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.811340  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.811832  445166 out.go:179]   - Using image docker.io/busybox:stable
	I1205 06:11:52.815898  445166 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 06:11:52.818964  445166 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:11:52.818985  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 06:11:52.819049  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.820654  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.841525  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.868481  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.890502  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.898803  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.910052  445166 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:11:52.910073  445166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:11:52.910131  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:11:52.918526  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.920436  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.946926  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:11:52.957113  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	W1205 06:11:52.960625  445166 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:11:52.960659  445166 retry.go:31] will retry after 344.36104ms: ssh: handshake failed: EOF
	W1205 06:11:52.960730  445166 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:11:52.960738  445166 retry.go:31] will retry after 186.819772ms: ssh: handshake failed: EOF
	I1205 06:11:53.048252  445166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1205 06:11:53.149107  445166 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 06:11:53.149141  445166 retry.go:31] will retry after 495.766296ms: ssh: handshake failed: EOF
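Note: the three "dial failure (will retry)" pairs above show transient SSH handshake EOFs being absorbed by a randomized retry delay. A minimal sketch of that pattern, assuming illustrative attempt counts and delay bounds; this is not minikube's actual retry.go implementation:

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry mimics the jittered "will retry after ..." behavior of the
// sshutil/retry lines above. Address, attempts, and delay bounds are
// assumptions for illustration.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		d := time.Duration(150+rand.Intn(400)) * time.Millisecond
		fmt.Printf("dial failure (will retry after %v): %v\n", d, err)
		time.Sleep(d)
	}
	return nil, lastErr
}

func main() {
	if conn, err := dialWithRetry("127.0.0.1:33133", 5); err == nil {
		conn.Close()
	}
}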
	I1205 06:11:53.544713  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 06:11:53.554182  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1205 06:11:53.555987  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 06:11:53.559318  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 06:11:53.562864  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 06:11:53.562929  445166 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 06:11:53.721940  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 06:11:53.732935  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:11:53.735892  445166 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 06:11:53.735957  445166 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 06:11:53.763261  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 06:11:53.763325  445166 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 06:11:53.938310  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 06:11:53.949740  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 06:11:53.953931  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 06:11:53.953955  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 06:11:53.958064  445166 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 06:11:53.958090  445166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 06:11:53.960836  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 06:11:53.960859  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 06:11:54.040893  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 06:11:54.040930  445166 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 06:11:54.043719  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 06:11:54.043744  445166 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 06:11:54.046659  445166 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:11:54.046681  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 06:11:54.118424  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 06:11:54.118480  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 06:11:54.164189  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:11:54.214426  445166 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 06:11:54.214472  445166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 06:11:54.222088  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 06:11:54.266455  445166 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.65975521s)
	I1205 06:11:54.266484  445166 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 06:11:54.266364  445166 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.218073401s)
	I1205 06:11:54.267956  445166 node_ready.go:35] waiting up to 6m0s for node "addons-640282" to be "Ready" ...
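Note: node_ready.go now polls the node object, up to the 6m0s budget stated above, until its Ready condition turns True (the "Ready":"False" retries below are this loop). A minimal client-go sketch of such a wait, assuming a hypothetical waitNodeReady helper:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its NodeReady condition is True,
// tolerating transient API errors, within the 6-minute budget from the log.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}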
	I1205 06:11:54.320993  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 06:11:54.321023  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 06:11:54.322251  445166 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:11:54.322275  445166 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 06:11:54.376284  445166 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:11:54.376309  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 06:11:54.379673  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 06:11:54.489487  445166 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 06:11:54.489516  445166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 06:11:54.526310  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 06:11:54.574905  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 06:11:54.593215  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 06:11:54.593252  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 06:11:54.704295  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 06:11:54.704330  445166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 06:11:54.776375  445166 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-640282" context rescaled to 1 replicas
	I1205 06:11:54.843164  445166 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 06:11:54.843194  445166 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 06:11:54.933926  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.389124911s)
	I1205 06:11:54.939398  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 06:11:54.939430  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 06:11:55.057846  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.503578724s)
	I1205 06:11:55.090522  445166 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:11:55.090556  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 06:11:55.123243  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 06:11:55.123266  445166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 06:11:55.139941  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 06:11:55.139966  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 06:11:55.156141  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 06:11:55.156178  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 06:11:55.181909  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:11:55.251653  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.695581145s)
	I1205 06:11:55.308205  445166 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 06:11:55.308232  445166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 06:11:55.555846  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1205 06:11:56.283711  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:11:58.673843  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.951854418s)
	I1205 06:11:58.673997  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.941001908s)
	I1205 06:11:58.674019  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.114644447s)
	I1205 06:11:58.674129  445166 addons.go:495] Verifying addon ingress=true in "addons-640282"
	I1205 06:11:58.674211  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.724440437s)
	I1205 06:11:58.674678  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.492733676s)
	W1205 06:11:58.674718  445166 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 06:11:58.674747  445166 retry.go:31] will retry after 265.530233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
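Note: this failure is an ordering race, not a broken manifest. The VolumeSnapshotClass object is applied in the same kubectl apply batch as the CRDs that define its kind, and API discovery has not yet picked the new types up, hence "no matches for kind ... ensure CRDs are installed first". The retry below reruns the batch with kubectl apply --force (at 06:11:58.940576) and succeeds about 2.7s later. One way to avoid the race is to wait for each CRD to report Established before applying dependent objects; a minimal sketch under that assumption (illustration only, not minikube's retry strategy):

package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitCRDEstablished polls a CRD until the apiserver marks it Established,
// i.e. its kind is servable and discovery will resolve it.
func waitCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

// Usage: wait for the three volumesnapshot CRDs named in the stdout above,
// then apply csi-hostpath-snapshotclass.yaml.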
	I1205 06:11:58.674334  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.51012143s)
	I1205 06:11:58.674356  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.452245226s)
	I1205 06:11:58.675034  445166 addons.go:495] Verifying addon registry=true in "addons-640282"
	I1205 06:11:58.674460  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.294763358s)
	I1205 06:11:58.674523  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.148185044s)
	I1205 06:11:58.675474  445166 addons.go:495] Verifying addon metrics-server=true in "addons-640282"
	I1205 06:11:58.674558  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.099611139s)
	I1205 06:11:58.674063  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.735731417s)
	I1205 06:11:58.678494  445166 out.go:179] * Verifying ingress addon...
	I1205 06:11:58.680333  445166 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-640282 service yakd-dashboard -n yakd-dashboard
	
	I1205 06:11:58.680409  445166 out.go:179] * Verifying registry addon...
	I1205 06:11:58.683425  445166 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 06:11:58.685297  445166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 06:11:58.692390  445166 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 06:11:58.692419  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:11:58.694268  445166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:11:58.694294  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1205 06:11:58.697691  445166 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
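Note: "Operation cannot be fulfilled ... the object has been modified" is the apiserver's optimistic-concurrency conflict: the StorageClass was updated by someone else between the read and the write, so the stale resourceVersion is rejected. The standard remedy is client-go's RetryOnConflict, which re-reads and re-applies the mutation. A minimal sketch for the failing step (marking local-path non-default); the helper name is hypothetical, while the annotation key is the standard default-class marker:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// retrying on resourceVersion conflicts like the one logged above.
func markNonDefault(ctx context.Context, c kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}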
	W1205 06:11:58.771820  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:11:58.914849  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.358950886s)
	I1205 06:11:58.914894  445166 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-640282"
	I1205 06:11:58.918107  445166 out.go:179] * Verifying csi-hostpath-driver addon...
	I1205 06:11:58.922024  445166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 06:11:58.935567  445166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:11:58.935610  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
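Note: the kapi.go:75/96 lines that dominate the rest of this section are a label-selector poll: list the addon's pods and report their phase until all are Running (here they sit in Pending for a long stretch). A minimal client-go sketch of that loop, assuming a hypothetical waitPodsRunning helper and the 6-minute budget used elsewhere in the log:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsRunning polls pods matching a label selector (for example
// "kubernetes.io/minikube-addons=csi-hostpath-driver" in kube-system)
// until at least one exists and all report phase Running.
func waitPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}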
	I1205 06:11:58.940576  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 06:11:59.187765  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:11:59.188771  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:11:59.427310  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:11:59.689599  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:11:59.690081  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:11:59.925377  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:00.189713  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:00.234650  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:00.291026  445166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 06:12:00.291202  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:12:00.342095  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:12:00.426644  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:00.468633  445166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 06:12:00.485573  445166 addons.go:239] Setting addon gcp-auth=true in "addons-640282"
	I1205 06:12:00.485673  445166 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:12:00.486254  445166 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:12:00.505900  445166 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 06:12:00.505956  445166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:12:00.525689  445166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:12:00.687356  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:00.688264  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:00.925474  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:01.186989  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:01.189472  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1205 06:12:01.273512  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:01.426938  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:01.630178  445166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.689550114s)
	I1205 06:12:01.630327  445166 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.124405311s)
	I1205 06:12:01.633304  445166 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 06:12:01.636203  445166 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1205 06:12:01.639004  445166 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 06:12:01.639034  445166 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 06:12:01.653438  445166 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 06:12:01.653521  445166 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 06:12:01.667101  445166 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:12:01.667126  445166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 06:12:01.682968  445166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 06:12:01.689978  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:01.691003  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:01.925349  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:02.202421  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:02.202913  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:02.234466  445166 addons.go:495] Verifying addon gcp-auth=true in "addons-640282"
	I1205 06:12:02.237635  445166 out.go:179] * Verifying gcp-auth addon...
	I1205 06:12:02.241216  445166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 06:12:02.295955  445166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 06:12:02.295978  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:02.425089  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:02.686535  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:02.688752  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:02.744548  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:02.925544  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:03.187916  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:03.189331  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:03.243885  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:03.425505  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:03.689114  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:03.689917  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:03.745012  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:03.770891  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:03.925103  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:04.187130  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:04.187348  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:04.244354  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:04.426212  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:04.686707  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:04.688437  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:04.744346  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:04.925168  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:05.187244  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:05.188321  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:05.244198  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:05.425764  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:05.687233  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:05.689278  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:05.745145  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:05.925696  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:06.188376  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:06.188446  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:06.244462  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:06.271275  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:06.425308  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:06.686540  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:06.688724  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:06.745063  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:06.925961  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:07.187315  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:07.194612  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:07.244047  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:07.425869  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:07.687024  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:07.688907  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:07.744754  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:07.925702  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:08.188172  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:08.188968  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:08.244950  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:08.426196  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:08.688464  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:08.689420  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:08.744107  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:08.770903  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:08.924694  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:09.186574  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:09.188225  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:09.244048  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:09.424818  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:09.687728  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:09.687902  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:09.746650  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:09.924892  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:10.188280  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:10.188485  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:10.244252  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:10.425680  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:10.686837  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:10.689197  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:10.744173  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:10.925778  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:11.187904  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:11.188093  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:11.244787  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:11.271540  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:11.425371  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:11.688223  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:11.688458  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:11.744238  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:11.924688  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:12.186806  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:12.188569  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:12.244511  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:12.425299  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:12.686672  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:12.688546  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:12.744498  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:12.925188  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:13.187141  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:13.187574  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:13.244292  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:13.425856  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:13.689465  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:13.689700  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1205 06:12:13.771026  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:13.789231  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:13.924935  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:14.187611  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:14.189070  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:14.244782  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:14.430836  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:14.686970  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:14.688949  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:14.744814  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:14.925364  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:15.186564  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:15.188631  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:15.244696  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:15.425646  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:15.687398  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:15.689428  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:15.744417  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:15.771196  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:15.925416  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:16.186345  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:16.188538  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:16.244461  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:16.425725  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:16.688147  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:16.689234  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:16.745035  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:16.925906  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:17.186944  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:17.188706  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:17.244579  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:17.425143  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:17.687573  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:17.689453  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:17.744325  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:17.771367  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:17.925041  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:18.186971  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:18.188411  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:18.244299  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:18.425898  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:18.687147  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:18.687635  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:18.747352  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:18.926133  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:19.187620  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:19.188546  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:19.244121  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:19.425670  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:19.689130  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:19.689613  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:19.744529  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:19.771617  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:19.925643  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:20.188113  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:20.189342  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:20.243928  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:20.426120  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:20.687160  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:20.688196  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:20.745052  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:20.925793  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:21.187964  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:21.189151  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:21.251285  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:21.425026  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:21.688218  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:21.688532  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:21.744039  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:21.924761  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:22.186310  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:22.188758  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:22.244658  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:22.271419  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:22.425510  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:22.687150  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:22.688886  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:22.744856  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:22.925166  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:23.187142  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:23.188685  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:23.244383  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:23.425372  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:23.686931  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:23.688682  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:23.744612  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:23.925379  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:24.187030  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:24.188435  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:24.244526  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:24.271582  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:24.426962  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:24.687417  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:24.688062  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:24.744716  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:24.925901  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:25.186802  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:25.188953  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:25.244825  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:25.425356  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:25.687270  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:25.688704  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:25.744524  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:25.925446  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:26.187453  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:26.188588  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:26.244569  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:26.426021  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:26.687809  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:26.688737  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:26.744573  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:26.771730  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:26.925670  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:27.187093  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:27.189149  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:27.244939  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:27.424955  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:27.688472  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:27.688939  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:27.744751  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:27.925674  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:28.188274  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:28.190160  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:28.244087  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:28.425964  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:28.688663  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:28.688740  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:28.745091  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:28.924888  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:29.188077  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:29.188216  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:29.245015  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:29.271848  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:29.425555  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:29.686623  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:29.703810  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:29.745086  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:29.925808  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:30.187784  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:30.189728  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:30.244660  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:30.425607  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:30.686636  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:30.688368  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:30.745028  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:30.925280  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:31.188110  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:31.188164  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:31.245063  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:31.425057  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:31.687117  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:31.688527  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:31.744358  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1205 06:12:31.770974  445166 node_ready.go:57] node "addons-640282" has "Ready":"False" status (will retry)
	I1205 06:12:31.925181  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:32.188130  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:32.188341  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:32.244008  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:32.425738  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:32.687609  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:32.689345  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:32.744417  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:32.925014  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:33.187149  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:33.187951  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:33.245233  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:33.424913  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:33.687233  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:33.688337  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:33.744123  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:33.925852  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:34.230985  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:34.243048  445166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 06:12:34.243071  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:34.256453  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
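The kapi.go:96 lines that dominate this log come from a poll loop: list the pods matching a label selector, log the current phase of any pod that is not yet Running, sleep briefly, repeat. Below is a minimal client-go sketch of that pattern, assuming a reachable kubeconfig at the default location; waitForPods and the 250ms interval are illustrative guesses from the timestamps above, not minikube's actual implementation.

```go
// Illustrative sketch (not minikube's kapi.go): poll pods matching a label
// selector until every one reports phase Running, logging pending pods on
// each tick, roughly like the "waiting for pod" lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(250 * time.Millisecond): // ~4 polls/s, matching the log cadence
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}
```

The "Found 2 Pods for label selector" line above corresponds to the List call finally returning a non-empty result for that selector.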
	I1205 06:12:34.276457  445166 node_ready.go:49] node "addons-640282" is "Ready"
	I1205 06:12:34.276487  445166 node_ready.go:38] duration metric: took 40.008506714s for node "addons-640282" to be "Ready" ...
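The node_ready.go transition above (from `has "Ready":"False"` warnings to `is "Ready"`) hinges on one condition in the node status. A compact sketch of that check, with the function name being illustrative:

```go
// Illustrative sketch of the node_ready.go check: a node counts as "Ready"
// when its NodeReady condition reports status True.
package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady mirrors the `has "Ready":"False"` / `is "Ready"` lines above.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```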
	I1205 06:12:34.276503  445166 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:12:34.276565  445166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:12:34.305036  445166 api_server.go:72] duration metric: took 42.008680166s to wait for apiserver process to appear ...
	I1205 06:12:34.305062  445166 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:12:34.305080  445166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 06:12:34.315177  445166 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 06:12:34.316864  445166 api_server.go:141] control plane version: v1.34.2
	I1205 06:12:34.316892  445166 api_server.go:131] duration metric: took 11.824403ms to wait for apiserver health ...
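The healthz probe logged above is a plain HTTPS GET against the apiserver, treating a 200 response with body "ok" as healthy. A minimal sketch follows; it skips TLS verification because this apiserver serves a cluster-local certificate, whereas a production client would load the cluster CA instead.

```go
// Illustrative sketch of the healthz probe: GET /healthz on the apiserver
// and report the status code and body, like the log lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip cert verification for the
		// cluster-local endpoint; real clients should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.49.2:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
```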
	I1205 06:12:34.316902  445166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:12:34.327504  445166 system_pods.go:59] 19 kube-system pods found
	I1205 06:12:34.327543  445166 system_pods.go:61] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:34.327551  445166 system_pods.go:61] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending
	I1205 06:12:34.327557  445166 system_pods.go:61] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending
	I1205 06:12:34.327561  445166 system_pods.go:61] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:34.327566  445166 system_pods.go:61] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:34.327569  445166 system_pods.go:61] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:34.327573  445166 system_pods.go:61] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:34.327578  445166 system_pods.go:61] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:34.327582  445166 system_pods.go:61] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending
	I1205 06:12:34.327585  445166 system_pods.go:61] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:34.327589  445166 system_pods.go:61] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:34.327593  445166 system_pods.go:61] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending
	I1205 06:12:34.327600  445166 system_pods.go:61] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending
	I1205 06:12:34.327603  445166 system_pods.go:61] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending
	I1205 06:12:34.327609  445166 system_pods.go:61] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:34.327619  445166 system_pods.go:61] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending
	I1205 06:12:34.327623  445166 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending
	I1205 06:12:34.327628  445166 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending
	I1205 06:12:34.327637  445166 system_pods.go:61] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending
	I1205 06:12:34.327642  445166 system_pods.go:74] duration metric: took 10.735719ms to wait for pod list to return data ...
	I1205 06:12:34.327650  445166 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:12:34.332676  445166 default_sa.go:45] found service account: "default"
	I1205 06:12:34.332702  445166 default_sa.go:55] duration metric: took 5.043176ms for default service account to be created ...
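The default_sa.go step above waits for the "default" ServiceAccount, whose existence signals that the token controller has populated the namespace. A compact sketch of that lookup, with the helper name being illustrative:

```go
// Illustrative sketch of the default service-account check above.
package sacheck

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultSAExists reports whether the "default" ServiceAccount is present,
// distinguishing "not yet created" from a real API error.
func defaultSAExists(ctx context.Context, cs kubernetes.Interface, ns string) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```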
	I1205 06:12:34.332711  445166 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:12:34.366121  445166 system_pods.go:86] 19 kube-system pods found
	I1205 06:12:34.366157  445166 system_pods.go:89] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:34.366164  445166 system_pods.go:89] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending
	I1205 06:12:34.366169  445166 system_pods.go:89] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending
	I1205 06:12:34.366173  445166 system_pods.go:89] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:34.366178  445166 system_pods.go:89] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:34.366183  445166 system_pods.go:89] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:34.366188  445166 system_pods.go:89] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:34.366192  445166 system_pods.go:89] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:34.366200  445166 system_pods.go:89] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending
	I1205 06:12:34.366204  445166 system_pods.go:89] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:34.366212  445166 system_pods.go:89] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:34.366217  445166 system_pods.go:89] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending
	I1205 06:12:34.366226  445166 system_pods.go:89] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending
	I1205 06:12:34.366230  445166 system_pods.go:89] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending
	I1205 06:12:34.366236  445166 system_pods.go:89] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:34.366244  445166 system_pods.go:89] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending
	I1205 06:12:34.366249  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending
	I1205 06:12:34.366253  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending
	I1205 06:12:34.366257  445166 system_pods.go:89] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending
	I1205 06:12:34.366273  445166 retry.go:31] will retry after 204.834201ms: missing components: kube-dns
	I1205 06:12:34.436267  445166 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 06:12:34.436302  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:34.661676  445166 system_pods.go:86] 19 kube-system pods found
	I1205 06:12:34.661716  445166 system_pods.go:89] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:34.661724  445166 system_pods.go:89] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending
	I1205 06:12:34.661730  445166 system_pods.go:89] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending
	I1205 06:12:34.661734  445166 system_pods.go:89] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:34.661738  445166 system_pods.go:89] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:34.661743  445166 system_pods.go:89] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:34.661748  445166 system_pods.go:89] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:34.661754  445166 system_pods.go:89] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:34.661761  445166 system_pods.go:89] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending
	I1205 06:12:34.661765  445166 system_pods.go:89] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:34.661772  445166 system_pods.go:89] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:34.661776  445166 system_pods.go:89] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending
	I1205 06:12:34.661780  445166 system_pods.go:89] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending
	I1205 06:12:34.661786  445166 system_pods.go:89] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:12:34.661798  445166 system_pods.go:89] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:34.661805  445166 system_pods.go:89] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending
	I1205 06:12:34.661815  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending
	I1205 06:12:34.661822  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:12:34.661826  445166 system_pods.go:89] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending
	I1205 06:12:34.661843  445166 retry.go:31] will retry after 329.681398ms: missing components: kube-dns
	I1205 06:12:34.687599  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:34.689024  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:34.746601  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:34.950436  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:35.023931  445166 system_pods.go:86] 19 kube-system pods found
	I1205 06:12:35.023971  445166 system_pods.go:89] "coredns-66bc5c9577-jbbkj" [39953dbb-d6ca-4ab3-8cf6-813f34ff8300] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:12:35.023982  445166 system_pods.go:89] "csi-hostpath-attacher-0" [9c77214b-2beb-4b3b-a27e-d931964d5896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 06:12:35.023991  445166 system_pods.go:89] "csi-hostpath-resizer-0" [e01ccef9-500a-42ff-898f-237d294cc5fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 06:12:35.023996  445166 system_pods.go:89] "csi-hostpathplugin-dqw5d" [eadeb7c0-b891-4b7e-afd3-01dd3ddab0e6] Pending
	I1205 06:12:35.024002  445166 system_pods.go:89] "etcd-addons-640282" [9cb8f125-ac33-4924-bf3e-eca210a109e6] Running
	I1205 06:12:35.024006  445166 system_pods.go:89] "kindnet-bz4mm" [12acfe02-ab61-4731-8cd9-706bed829f72] Running
	I1205 06:12:35.024012  445166 system_pods.go:89] "kube-apiserver-addons-640282" [8c5f7608-6243-486d-bb47-7263cce0ebfe] Running
	I1205 06:12:35.024021  445166 system_pods.go:89] "kube-controller-manager-addons-640282" [69f9da50-4679-4821-9731-957b7c7648d1] Running
	I1205 06:12:35.024029  445166 system_pods.go:89] "kube-ingress-dns-minikube" [88d7f75c-a3a0-4f0e-9f39-ab17fe643e1b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 06:12:35.024039  445166 system_pods.go:89] "kube-proxy-lnnkp" [f1dd8ed0-13f5-4c25-9c38-b3120e023e4d] Running
	I1205 06:12:35.024044  445166 system_pods.go:89] "kube-scheduler-addons-640282" [9dc9e0c2-dbc3-44bd-8620-63bfd21e2b0c] Running
	I1205 06:12:35.024050  445166 system_pods.go:89] "metrics-server-85b7d694d7-wmgpc" [df8dbdd7-a479-473e-a349-d60d3ec907bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 06:12:35.024062  445166 system_pods.go:89] "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1205 06:12:35.024067  445166 system_pods.go:89] "registry-6b586f9694-4sckq" [712999ac-5491-44f0-9f17-8323f282a76e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 06:12:35.024073  445166 system_pods.go:89] "registry-creds-764b6fb674-4zwkd" [fc52dd35-e9bf-4770-aa18-66aac8d15c08] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1205 06:12:35.024083  445166 system_pods.go:89] "registry-proxy-nlqwm" [cd96544e-ef6d-4af2-9913-1cf334dcaf35] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 06:12:35.024089  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7kcmn" [c7f4f7ee-47b3-4ea5-8a98-330ef56c69af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:12:35.024096  445166 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8h7tn" [e5fb8aee-2bb3-4d74-825d-51f6334ea308] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 06:12:35.024102  445166 system_pods.go:89] "storage-provisioner" [2ea4d940-93f0-4289-92ad-33c4d063e981] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 06:12:35.024112  445166 system_pods.go:126] duration metric: took 691.395496ms to wait for k8s-apps to be running ...
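The retry.go:31 lines earlier ("will retry after 204.834201ms: missing components: kube-dns", then 329ms) show a growing, jittered backoff around a re-checked condition. A small sketch of that shape, assuming a time budget; the exact growth factor and jitter here are illustrative, not minikube's actual constants.

```go
// Illustrative sketch of the retry pattern above: re-run a check with a
// growing, jittered delay until it passes or the budget runs out.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(budget time.Duration, check func() error) error {
	deadline := time.Now().Add(budget)
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
}

func main() {
	tries := 0
	_ = retryUntil(5*time.Second, func() error {
		tries++
		if tries < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
```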
	I1205 06:12:35.024153  445166 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:12:35.024223  445166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:12:35.056582  445166 system_svc.go:56] duration metric: took 32.421216ms WaitForService to wait for kubelet
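The kubelet check above relies on systemd's exit-code contract: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so no output parsing is needed. A sketch checking the kubelet unit directly (the log runs it over SSH with sudo; this local variant is illustrative):

```go
// Illustrative sketch of the kubelet service check: systemd's exit code
// answers "is the unit active?" without any output to parse.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means active; any other exit code surfaces as a non-nil err.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```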
	I1205 06:12:35.056651  445166 kubeadm.go:587] duration metric: took 42.760298438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:12:35.056684  445166 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:12:35.065822  445166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 06:12:35.065898  445166 node_conditions.go:123] node cpu capacity is 2
	I1205 06:12:35.065940  445166 node_conditions.go:105] duration metric: took 9.232337ms to run NodePressure ...
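The NodePressure figures above (203034800Ki ephemeral storage, 2 CPUs) are read from the node's capacity map. A compact sketch of that read; copying the quantities into locals first matters because resource.Quantity's String method needs an addressable value.

```go
// Illustrative sketch of the NodePressure capacity read above: the logged
// values live in node.Status.Capacity, keyed by resource name.
package capacity

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeCapacity returns the CPU and ephemeral-storage capacity strings,
// e.g. "2" and "203034800Ki" for the node in this log.
func nodeCapacity(node *corev1.Node) (cpu, ephemeral string) {
	c := node.Status.Capacity[corev1.ResourceCPU]
	e := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	return c.String(), e.String()
}
```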
	I1205 06:12:35.065966  445166 start.go:242] waiting for startup goroutines ...
	I1205 06:12:35.190242  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:35.191685  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:35.291022  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:35.426531  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:35.690167  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:35.690204  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:35.744287  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:35.925260  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:36.188828  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:36.188918  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:36.288568  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:36.426202  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:36.690024  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:36.690481  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:36.745111  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:36.925596  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:37.198726  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:37.199260  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:37.292441  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:37.426252  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:37.688421  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:37.689354  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:37.744744  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:37.926581  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:38.189726  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:38.190121  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:38.245214  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:38.426084  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:38.690210  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:38.690686  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:38.789571  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:38.932287  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:39.193020  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:39.193457  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:39.244775  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:39.427611  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:39.692983  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:39.693448  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:39.744720  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:39.929153  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:40.196119  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:40.197581  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:40.245146  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:40.427235  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:40.690519  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:40.691322  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:40.789863  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:40.926244  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:41.189598  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:41.189735  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:41.244780  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:41.426262  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:41.686889  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:41.689270  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:41.744358  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:41.925413  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:42.190287  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:42.191988  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:42.245899  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:42.426907  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:42.689160  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:42.689460  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:42.744654  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:42.927435  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:43.187832  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:43.188954  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:43.244896  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:43.426296  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:43.688942  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:43.689096  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:43.744731  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:43.925549  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:44.187616  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:44.188634  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:44.244544  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:44.426350  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:44.686767  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:44.689313  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:44.744626  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:44.925842  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:45.190191  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:45.190306  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:45.245006  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:45.435911  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:45.687798  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:45.688320  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:45.745178  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:45.925865  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:46.190967  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:46.191499  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:46.245936  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:46.428356  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:46.687342  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:46.689632  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:46.744972  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:46.925759  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:47.189211  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:47.189333  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:47.244591  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:47.425992  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:47.688923  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:47.689635  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:47.745043  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:47.925844  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:48.187629  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:48.188699  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:48.244690  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:48.425487  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:48.687431  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:48.690042  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:48.745267  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:48.925976  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:49.189537  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:49.189933  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:49.245115  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:49.425754  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:49.688778  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:49.688930  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:49.744646  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:49.925664  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:50.188244  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:50.189790  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:50.244844  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:50.426006  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:50.688316  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:50.688644  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:50.744726  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:50.926672  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:51.188835  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:51.190335  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:51.244445  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:51.426161  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:51.689057  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:51.689224  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:51.744425  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:51.926410  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:52.187348  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:52.190095  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:52.246645  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:52.428212  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:52.689517  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:52.690506  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:52.744773  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:52.926015  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:53.189787  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:53.190219  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:53.244076  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:53.425404  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:53.689630  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:53.689873  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:53.744682  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:53.926455  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:54.188149  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:54.190333  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:54.244783  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:54.430011  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:54.689668  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:54.690053  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:54.745650  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:54.927716  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:55.188916  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:55.191200  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:55.247879  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:55.427730  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:55.689979  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:55.690266  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:55.745198  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:55.925152  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:56.189401  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:56.189668  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:56.245709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:56.427160  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:56.687118  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:56.688442  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:56.744999  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:56.925894  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:57.187699  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:57.188977  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:57.244864  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:57.426573  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:57.693372  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:57.693700  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:57.744572  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:57.925912  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:58.201099  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:58.204488  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:58.292224  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:58.425620  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:58.687104  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:58.689411  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:58.744638  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:58.926511  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:59.186813  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:59.189852  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:59.244894  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:59.425941  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:12:59.687657  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:12:59.689638  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:12:59.744462  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:12:59.926349  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:00.206351  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:00.206566  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:00.275875  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:00.432930  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:00.688043  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:00.689388  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:00.744643  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:00.926069  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:01.189887  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:01.190146  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:01.289712  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:01.425799  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:01.690320  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:01.690532  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:01.744570  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:01.925940  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:02.188616  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:02.188793  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:02.244999  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:02.427606  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:02.688042  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:02.689123  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:02.745183  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:02.925358  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:03.188835  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:03.190164  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:03.244965  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:03.425660  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:03.689416  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:03.689594  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:03.744536  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:03.926021  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:04.187143  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:04.188893  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:04.245114  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:04.427279  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:04.688865  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:04.689040  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:04.745224  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:04.925285  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:05.191339  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:05.191599  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:05.280721  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:05.426596  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:05.687374  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:05.689307  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:05.745052  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:05.925197  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:06.187300  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:06.187583  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:06.246439  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:06.426496  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:06.689516  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:06.690969  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:06.745695  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:06.929784  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:07.189611  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:07.190062  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:07.289793  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:07.426212  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:07.690091  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:07.690511  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:07.744621  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:07.926243  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:08.186815  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:08.189130  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:08.244515  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:08.426703  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:08.686702  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:08.688715  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:08.744999  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:08.925412  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:09.187464  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:09.188340  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 06:13:09.244577  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:09.426860  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:09.691259  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:09.692969  445166 kapi.go:107] duration metric: took 1m11.007671445s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 06:13:09.745792  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:09.926522  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:10.193284  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:10.244709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:10.426138  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:10.687025  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:10.745480  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:10.926263  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:11.187027  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:11.245257  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:11.425893  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:11.687607  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:11.744851  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:11.926011  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:12.187582  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:12.244891  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:12.426699  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:12.687411  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:12.787924  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:12.925942  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:13.188676  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:13.244217  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:13.428473  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:13.687134  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:13.745318  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:13.925623  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:14.187424  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:14.288440  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:14.427644  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:14.687692  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:14.745621  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:14.926623  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:15.187426  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:15.244769  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:15.429504  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:15.688099  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:15.746094  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:15.928359  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:16.187204  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:16.243984  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:16.426262  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:16.688314  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:16.745347  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:16.925881  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:17.187289  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:17.244450  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:17.425827  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:17.687071  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:17.745141  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:17.926142  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:18.187141  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:18.244881  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:18.425880  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:18.686946  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:18.744955  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:18.925606  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:19.187301  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:19.244769  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:19.429270  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:19.687495  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:19.744036  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:19.926538  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:20.187530  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:20.244946  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:20.435603  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:20.687074  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:20.745055  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:20.925412  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:21.191395  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:21.244884  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:21.426124  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:21.686882  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:21.745513  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:21.926276  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:22.188696  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:22.245429  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:22.426342  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:22.687840  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:22.744956  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:22.925893  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:23.188262  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:23.245230  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:23.425151  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:23.687353  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:23.746059  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:23.925521  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:24.187480  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:24.245265  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:24.425544  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:24.686497  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:24.744409  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:24.925685  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:25.186699  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:25.244575  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:25.426313  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:25.686990  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:25.787042  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:25.926426  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:26.187420  445166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 06:13:26.244705  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:26.425648  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:26.687184  445166 kapi.go:107] duration metric: took 1m28.003757366s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 06:13:26.745031  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:26.925390  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:27.244709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:27.426205  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:27.744896  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:27.925926  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:28.244324  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:28.425610  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:28.744997  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:28.926145  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:29.244620  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:29.427720  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:29.745091  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:29.925456  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:30.244484  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:30.425570  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:30.744480  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:30.938092  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:31.244417  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:31.426072  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 06:13:31.744477  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:31.926308  445166 kapi.go:107] duration metric: took 1m33.004287175s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 06:13:32.244569  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:32.745333  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:33.244709  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:33.744097  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:34.245485  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:34.745224  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:35.244777  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:35.745118  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:36.244541  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:36.744252  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:37.245092  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:37.744332  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:38.245451  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:38.745106  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:39.244313  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:39.745163  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:40.244570  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:40.745435  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:41.244851  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:41.744098  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:42.246517  445166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 06:13:42.745332  445166 kapi.go:107] duration metric: took 1m40.504115494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
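Each kapi.go:96 line above is one iteration of a label-selector poll: minikube lists the pods matching the selector every few hundred milliseconds, logs the observed state (here `Pending: [<nil>]` until the pod is up), and prints the kapi.go:107 duration metric once every match is Running. A minimal client-go sketch of that pattern, assuming a pre-built clientset and the roughly 500 ms cadence visible in the timestamps; this is an illustration, not minikube's actual implementation:

```go
package kapi

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabeledPods blocks until every pod matching selector in ns is
// Running, mirroring the kapi.go wait loop in the log above.
func WaitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Treat errors and "no pods yet" as retryable, matching the
				// repeated "current state: Pending" lines above.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}
```

For example, `WaitForLabeledPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)` corresponds to the wait that finishes at 06:13:26 after 1m28s.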
	I1205 06:13:42.748292  445166 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-640282 cluster.
	I1205 06:13:42.751060  445166 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 06:13:42.753829  445166 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 06:13:42.756796  445166 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1205 06:13:42.759767  445166 addons.go:530] duration metric: took 1m50.463018388s for enable addons: enabled=[nvidia-device-plugin registry-creds amd-gpu-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
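The gcp-auth hint a few lines up is actionable: the admission webhook skips any pod whose metadata carries a label with the `gcp-auth-skip-secret` key. A minimal client-go sketch creating such a pod; the label value, pod name, and spec here are illustrative choices, since the message only specifies the label key:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// Per the message above, the label *key* opts the pod out of
			// credential mounting; "true" as a value is just a convention.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Args:  []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```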
	I1205 06:13:42.759826  445166 start.go:247] waiting for cluster config update ...
	I1205 06:13:42.759847  445166 start.go:256] writing updated cluster config ...
	I1205 06:13:42.760141  445166 ssh_runner.go:195] Run: rm -f paused
	I1205 06:13:42.764527  445166 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:13:42.768283  445166 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jbbkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.772723  445166 pod_ready.go:94] pod "coredns-66bc5c9577-jbbkj" is "Ready"
	I1205 06:13:42.772753  445166 pod_ready.go:86] duration metric: took 4.442269ms for pod "coredns-66bc5c9577-jbbkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.774797  445166 pod_ready.go:83] waiting for pod "etcd-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.778762  445166 pod_ready.go:94] pod "etcd-addons-640282" is "Ready"
	I1205 06:13:42.778790  445166 pod_ready.go:86] duration metric: took 3.968491ms for pod "etcd-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.780921  445166 pod_ready.go:83] waiting for pod "kube-apiserver-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.785038  445166 pod_ready.go:94] pod "kube-apiserver-addons-640282" is "Ready"
	I1205 06:13:42.785061  445166 pod_ready.go:86] duration metric: took 4.116087ms for pod "kube-apiserver-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:42.787266  445166 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.168467  445166 pod_ready.go:94] pod "kube-controller-manager-addons-640282" is "Ready"
	I1205 06:13:43.168493  445166 pod_ready.go:86] duration metric: took 381.207785ms for pod "kube-controller-manager-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.369268  445166 pod_ready.go:83] waiting for pod "kube-proxy-lnnkp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.768018  445166 pod_ready.go:94] pod "kube-proxy-lnnkp" is "Ready"
	I1205 06:13:43.768056  445166 pod_ready.go:86] duration metric: took 398.762418ms for pod "kube-proxy-lnnkp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:43.968338  445166 pod_ready.go:83] waiting for pod "kube-scheduler-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:44.368921  445166 pod_ready.go:94] pod "kube-scheduler-addons-640282" is "Ready"
	I1205 06:13:44.368954  445166 pod_ready.go:86] duration metric: took 400.591017ms for pod "kube-scheduler-addons-640282" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:13:44.368969  445166 pod_ready.go:40] duration metric: took 1.60440847s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
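The pod_ready.go lines implement a different check from the kapi poll: each named kube-system pod is polled until its PodReady condition is True, or until the pod disappears (the "or be gone" clause). A sketch of that Ready-or-gone loop with client-go, assuming kubeconfig at the default path and borrowing a pod name from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	name := "coredns-66bc5c9577-jbbkj" // pod name taken from the log above
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone")
			return
		}
		if err != nil {
			panic(err)
		}
		if isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```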
	I1205 06:13:44.423652  445166 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1205 06:13:44.427268  445166 out.go:179] * Done! kubectl is now configured to use "addons-640282" cluster and "default" namespace by default
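The start.go:625 line reports the kubectl/cluster minor-version skew (kubectl's documented support window is one minor version in either direction, so skew 1 is fine here). A tiny sketch of that arithmetic, with the version parsing deliberately simplified; real code would use a semver library and handle malformed input:

```go
package skew

import (
	"strconv"
	"strings"
)

// minorSkew returns |minor(a) - minor(b)| for versions like "1.33.2" and
// "1.34.2"; for the log line above it yields 1.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}
```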
	
	
	==> CRI-O <==
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.640231165Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:91a020e46ac4fb31a82c646fb6ee5c426b8ff3aa9db2c219c7b94d4a092e8448 UID:4c88078d-a560-4a11-ba24-8bb270c20468 NetNS:/var/run/netns/df6f6649-76ed-491e-8cfc-dd4983d553e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000bcbf8}] Aliases:map[]}"
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.640392389Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.643736226Z" level=info msg="Ran pod sandbox 91a020e46ac4fb31a82c646fb6ee5c426b8ff3aa9db2c219c7b94d4a092e8448 with infra container: default/busybox/POD" id=e6a230a2-c0fd-4649-9205-4ae62074cab5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.648356499Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f7d9073-61ae-44df-9403-ff822b7cd346 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.648618179Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6f7d9073-61ae-44df-9403-ff822b7cd346 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.648723058Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6f7d9073-61ae-44df-9403-ff822b7cd346 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.649595674Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f7f0e80-d859-4b7b-9f7d-49d0bea2c740 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:13:45 addons-640282 crio[826]: time="2025-12-05T06:13:45.65103157Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.655723035Z" level=info msg="Removing container: 3947cbf40b9ad541f29d5ed9aaa65645f4251a0b8e9f7d0daf12c4de613458b1" id=35134732-d8c4-4615-966c-42f91ffa5eaf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.66300365Z" level=info msg="Error loading conmon cgroup of container 3947cbf40b9ad541f29d5ed9aaa65645f4251a0b8e9f7d0daf12c4de613458b1: cgroup deleted" id=35134732-d8c4-4615-966c-42f91ffa5eaf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.697216273Z" level=info msg="Removed container 3947cbf40b9ad541f29d5ed9aaa65645f4251a0b8e9f7d0daf12c4de613458b1: gcp-auth/gcp-auth-certs-create-t4x68/create" id=35134732-d8c4-4615-966c-42f91ffa5eaf name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.709167393Z" level=info msg="Stopping pod sandbox: 5bc76fbf5aceb6631eaaad91bb8da7a82682774d961f79860dc08b605f1ee306" id=928c910c-7b51-4c2a-af79-f619c1f0d825 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.709408733Z" level=info msg="Stopped pod sandbox (already stopped): 5bc76fbf5aceb6631eaaad91bb8da7a82682774d961f79860dc08b605f1ee306" id=928c910c-7b51-4c2a-af79-f619c1f0d825 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.710033453Z" level=info msg="Removing pod sandbox: 5bc76fbf5aceb6631eaaad91bb8da7a82682774d961f79860dc08b605f1ee306" id=f4ebea18-db1b-416f-acb8-71e839f6b4ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.71991663Z" level=info msg="Removed pod sandbox: 5bc76fbf5aceb6631eaaad91bb8da7a82682774d961f79860dc08b605f1ee306" id=f4ebea18-db1b-416f-acb8-71e839f6b4ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.780175474Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1f7f0e80-d859-4b7b-9f7d-49d0bea2c740 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.780939995Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=86e2082f-e27f-403b-b029-b3d916fe74a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.784943547Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e25d2193-b7b4-49ec-a48b-6176505f0596 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.791881341Z" level=info msg="Creating container: default/busybox/busybox" id=200afb54-c8bb-4e05-b5a8-636bd98393c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.792001162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.798913855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.799620587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.818989218Z" level=info msg="Created container f463a8307b8ec412d9ed199103f30c449f073fb8b466025cdabb4fa09c649bd5: default/busybox/busybox" id=200afb54-c8bb-4e05-b5a8-636bd98393c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.835998531Z" level=info msg="Starting container: f463a8307b8ec412d9ed199103f30c449f073fb8b466025cdabb4fa09c649bd5" id=9c255fb8-47f1-40c4-858e-25f5458f297b name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 06:13:47 addons-640282 crio[826]: time="2025-12-05T06:13:47.840725004Z" level=info msg="Started container" PID=4943 containerID=f463a8307b8ec412d9ed199103f30c449f073fb8b466025cdabb4fa09c649bd5 description=default/busybox/busybox id=9c255fb8-47f1-40c4-858e-25f5458f297b name=/runtime.v1.RuntimeService/StartContainer sandboxID=91a020e46ac4fb31a82c646fb6ee5c426b8ff3aa9db2c219c7b94d4a092e8448
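The CRI-O entries above trace the standard CRI call sequence for starting default/busybox: RunPodSandbox, ImageStatus (miss), PullImage, CreateContainer, StartContainer. A client-side sketch of that sequence against the CRI gRPC API; the socket path is CRI-O's usual default, the sandbox ID is copied from the log as a placeholder, and the container/sandbox configs are stripped to a minimum (in reality the kubelet fills these in):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default endpoint (assumed for this host).
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	ctx := context.Background()

	img := runtimeapi.NewImageServiceClient(conn)
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// 1. ImageStatus: mirrors the "Checking image status" / "not found" lines.
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// 2. PullImage: mirrors "Pulling image" / "Pulled image".
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			log.Fatal(err)
		}
	}

	// 3/4. CreateContainer + StartContainer inside the already-running
	// sandbox from the log; ID and configs are placeholders.
	sandboxID := "91a020e46ac4fb31a82c646fb6ee5c426b8ff3aa9db2c219c7b94d4a092e8448"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "busybox"},
			Image:    spec,
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```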
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	f463a8307b8ec       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   91a020e46ac4f       busybox                                    default
	f0661aef38682       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 15 seconds ago       Running             gcp-auth                                 0                   91ec0cd7cdf85       gcp-auth-78565c9fb4-xdcct                  gcp-auth
	ae8fe59a87c4c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          26 seconds ago       Running             csi-snapshotter                          0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	ee08f2df7a0e7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          28 seconds ago       Running             csi-provisioner                          0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	1343c4e249efa       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            29 seconds ago       Running             liveness-probe                           0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	576b9f44bab0b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           30 seconds ago       Running             hostpath                                 0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	dd155a05dcc71       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             31 seconds ago       Running             controller                               0                   1f9299b37c527       ingress-nginx-controller-6c8bf45fb-r8d6p   ingress-nginx
	a02282a8dda4a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            38 seconds ago       Running             gadget                                   0                   2fe794cafa0df       gadget-62wxv                               gadget
	36207d2abda3a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                43 seconds ago       Running             node-driver-registrar                    0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	8b3a6f3e68b04       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             44 seconds ago       Exited              patch                                    1                   f791b250113c2       gcp-auth-certs-patch-2fhtg                 gcp-auth
	fdc19967900f7       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               45 seconds ago       Running             cloud-spanner-emulator                   0                   d420f5d040a14       cloud-spanner-emulator-5bdddb765-4xzt7     default
	8ca95d8216ff9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              49 seconds ago       Running             registry-proxy                           0                   a892af72684ee       registry-proxy-nlqwm                       kube-system
	e05ecd19c0205       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              52 seconds ago       Running             csi-resizer                              0                   50781fa0e7e0c       csi-hostpath-resizer-0                     kube-system
	2c117baff7e0b       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     53 seconds ago       Running             nvidia-device-plugin-ctr                 0                   43dea51649ec5       nvidia-device-plugin-daemonset-ft52z       kube-system
	1e9a9d06da060       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           58 seconds ago       Running             registry                                 0                   2f1e5e72ebe6e       registry-6b586f9694-4sckq                  kube-system
	dfc4a73354f1a       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             59 seconds ago       Exited              patch                                    1                   3a8fb17d68cf6       ingress-nginx-admission-patch-rl2nx        ingress-nginx
	66ce1a90aa991       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   59 seconds ago       Exited              create                                   0                   dd1b4999e33fe       ingress-nginx-admission-create-w7jhq       ingress-nginx
	eb58125e4d2c7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   acbd0037edc74       snapshot-controller-7d9fbc56b8-7kcmn       kube-system
	249d0f3d91c82       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   540e304ec1231       csi-hostpathplugin-dqw5d                   kube-system
	56309b0868051       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   7d0504f045ecc       kube-ingress-dns-minikube                  kube-system
	513dee4bbb57b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c60143880a783       snapshot-controller-7d9fbc56b8-8h7tn       kube-system
	2798958d4d891       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   b5a421be82bb3       yakd-dashboard-5ff678cb9-tgblp             yakd-dashboard
	75d36e5745352       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   ea5db399fbede       csi-hostpath-attacher-0                    kube-system
	18303e803325e       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   84b1c08cbaa7c       metrics-server-85b7d694d7-wmgpc            kube-system
	b703017818da9       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   b65f4b9f35b44       local-path-provisioner-648f6765c9-ndg8q    local-path-storage
	8f819a6511b2f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   57c799304a7d8       storage-provisioner                        kube-system
	9e50a765cdd0b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   cf9c182794ffe       coredns-66bc5c9577-jbbkj                   kube-system
	954b5a1cbede7       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   9385f60acfa47       kube-proxy-lnnkp                           kube-system
	afa775c377245       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   44a84584e9a3e       kindnet-bz4mm                              kube-system
	dbaf492de7d0d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   8f1302c750456       kube-scheduler-addons-640282               kube-system
	130424b6298d0       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   92f507b818ad7       kube-controller-manager-addons-640282      kube-system
	ce5973768e215       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   54b06010ae3b0       etcd-addons-640282                         kube-system
	6a73bdffbbb7c       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   0966444cb3d55       kube-apiserver-addons-640282               kube-system
	
	
	==> coredns [9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84] <==
	[INFO] 10.244.0.8:34004 - 49718 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000350379s
	[INFO] 10.244.0.8:34004 - 14398 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002040874s
	[INFO] 10.244.0.8:34004 - 9474 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002018736s
	[INFO] 10.244.0.8:34004 - 3470 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000152592s
	[INFO] 10.244.0.8:34004 - 37296 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000200585s
	[INFO] 10.244.0.8:44865 - 5170 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157179s
	[INFO] 10.244.0.8:44865 - 4965 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104231s
	[INFO] 10.244.0.8:37078 - 13901 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085391s
	[INFO] 10.244.0.8:37078 - 14098 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179817s
	[INFO] 10.244.0.8:45875 - 56872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102614s
	[INFO] 10.244.0.8:45875 - 56707 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148998s
	[INFO] 10.244.0.8:36042 - 56699 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00163534s
	[INFO] 10.244.0.8:36042 - 56527 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001752116s
	[INFO] 10.244.0.8:44496 - 41356 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133236s
	[INFO] 10.244.0.8:44496 - 41175 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165598s
	[INFO] 10.244.0.21:33361 - 20419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178307s
	[INFO] 10.244.0.21:59574 - 59840 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017775s
	[INFO] 10.244.0.21:53634 - 64002 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000240676s
	[INFO] 10.244.0.21:39121 - 22281 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000350067s
	[INFO] 10.244.0.21:37559 - 15277 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120542s
	[INFO] 10.244.0.21:37560 - 33087 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132777s
	[INFO] 10.244.0.21:55444 - 54378 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00222335s
	[INFO] 10.244.0.21:35327 - 16481 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002373144s
	[INFO] 10.244.0.21:54324 - 24210 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001231643s
	[INFO] 10.244.0.21:42400 - 9481 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001466789s
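	The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion, not lookup failures: with ndots:5 the resolver tries every search suffix (the pod's own namespace, svc.cluster.local, cluster.local, then the node's us-east-2.compute.internal suffix) before the bare name finally answers NOERROR. A pod resolv.conf that would produce exactly this query sequence is sketched below; the search suffixes are read off the queries themselves, while the nameserver address is an assumption (10.96.0.10 is the conventional kube-dns ClusterIP and does not appear in this log). For the 10.244.0.21 queries the first suffix would be gcp-auth.svc.cluster.local instead.
	
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10
	    options ndots:5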
	
	
	==> describe nodes <==
	Name:               addons-640282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-640282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=addons-640282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_11_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-640282
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-640282"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-640282
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:13:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:13:50 +0000   Fri, 05 Dec 2025 06:11:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:13:50 +0000   Fri, 05 Dec 2025 06:11:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:13:50 +0000   Fri, 05 Dec 2025 06:11:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:13:50 +0000   Fri, 05 Dec 2025 06:12:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-640282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                aa3571c6-896a-4255-aadd-3629cc6297b8
	  Boot ID:                    6438d548-ea0a-487b-93bc-8af12c014d83
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-4xzt7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gadget                      gadget-62wxv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gcp-auth                    gcp-auth-78565c9fb4-xdcct                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-r8d6p    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         119s
	  kube-system                 coredns-66bc5c9577-jbbkj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m4s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpathplugin-dqw5d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 etcd-addons-640282                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-bz4mm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-addons-640282                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-addons-640282       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-lnnkp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-addons-640282                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 metrics-server-85b7d694d7-wmgpc             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m
	  kube-system                 nvidia-device-plugin-daemonset-ft52z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 registry-6b586f9694-4sckq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-creds-764b6fb674-4zwkd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-nlqwm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 snapshot-controller-7d9fbc56b8-7kcmn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-7d9fbc56b8-8h7tn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  local-path-storage          local-path-provisioner-648f6765c9-ndg8q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tgblp              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m3s   kube-proxy       
	  Normal   Starting                 2m10s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m10s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s  kubelet          Node addons-640282 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s  kubelet          Node addons-640282 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s  kubelet          Node addons-640282 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m5s   node-controller  Node addons-640282 event: Registered Node addons-640282 in Controller
	  Normal   NodeReady                83s    kubelet          Node addons-640282 status is now: NodeReady
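	As a sanity check, the "Allocated resources" totals can be re-derived from the pod table: six pods request 100m (ingress-nginx-controller, coredns, etcd, kindnet, kube-scheduler, metrics-server), kube-apiserver requests 250m and kube-controller-manager 200m, summing to 1050m, i.e. 52.5% of the 2 allocatable CPUs, which kubectl displays as 52%. A minimal Go sketch of the same arithmetic with apimachinery's resource.Quantity follows; only the quantities are taken from the table, everything else is illustrative.
	
	    package main
	
	    import (
	    	"fmt"
	
	    	"k8s.io/apimachinery/pkg/api/resource"
	    )
	
	    func main() {
	    	// Non-zero CPU requests from the pod table above: ingress-nginx-controller,
	    	// coredns, etcd, kindnet, kube-scheduler, metrics-server (100m each),
	    	// kube-apiserver (250m), kube-controller-manager (200m).
	    	requests := []string{"100m", "100m", "100m", "100m", "100m", "100m", "250m", "200m"}
	
	    	total := resource.NewMilliQuantity(0, resource.DecimalSI)
	    	for _, r := range requests {
	    		q := resource.MustParse(r)
	    		total.Add(q)
	    	}
	
	    	allocatable := resource.MustParse("2") // Allocatable cpu reported for addons-640282
	    	pct := 100 * float64(total.MilliValue()) / float64(allocatable.MilliValue())
	    	fmt.Printf("cpu requests: %s (%.1f%% of %s)\n", total, pct, allocatable.String())
	    	// Prints: cpu requests: 1050m (52.5% of 2); kubectl shows this as 52%.
	    }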
	
	
	==> dmesg <==
	[Dec 5 03:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014702] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514036] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693] <==
	{"level":"warn","ts":"2025-12-05T06:11:43.741309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.754872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.771050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.793570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.810169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.859708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.912178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.917660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.923715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.938816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.967934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:43.972092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.010574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.036840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.043702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.077502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.100071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.119459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:44.211797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:59.229166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:11:59.239579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.124449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.139080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.170616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:12:22.182056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40506","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f0661aef386820b0e5bb007bc6340e7a81a61798c8a5d657bde32c3a21a9ec07] <==
	2025/12/05 06:13:42 GCP Auth Webhook started!
	2025/12/05 06:13:44 Ready to marshal response ...
	2025/12/05 06:13:44 Ready to write response ...
	2025/12/05 06:13:45 Ready to marshal response ...
	2025/12/05 06:13:45 Ready to write response ...
	2025/12/05 06:13:45 Ready to marshal response ...
	2025/12/05 06:13:45 Ready to write response ...
	
	
	==> kernel <==
	 06:13:58 up  2:56,  0 user,  load average: 2.39, 2.00, 1.72
	Linux addons-640282 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b] <==
	E1205 06:12:23.935016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1205 06:12:23.937515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1205 06:12:23.937515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1205 06:12:23.938767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1205 06:12:25.338300       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:12:25.338347       1 metrics.go:72] Registering metrics
	I1205 06:12:25.338423       1 controller.go:711] "Syncing nftables rules"
	I1205 06:12:33.938562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:12:33.938624       1 main.go:301] handling current node
	I1205 06:12:43.935860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:12:43.935892       1 main.go:301] handling current node
	I1205 06:12:53.934458       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:12:53.934519       1 main.go:301] handling current node
	I1205 06:13:03.935485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:13:03.935518       1 main.go:301] handling current node
	I1205 06:13:13.935354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:13:13.935398       1 main.go:301] handling current node
	I1205 06:13:23.934468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:13:23.934499       1 main.go:301] handling current node
	I1205 06:13:33.934501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:13:33.934565       1 main.go:301] handling current node
	I1205 06:13:43.934819       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:13:43.934856       1 main.go:301] handling current node
	I1205 06:13:53.934481       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:13:53.934550       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e] <==
	E1205 06:12:40.402228       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.194.127:443: connect: connection refused" logger="UnhandledError"
	W1205 06:12:40.402853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:40.402914       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:12:40.403972       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.194.127:443: connect: connection refused" logger="UnhandledError"
	W1205 06:12:41.404209       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:41.404271       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 06:12:41.404286       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 06:12:41.404327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:41.404387       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 06:12:41.405480       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 06:12:45.327260       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 06:12:45.415788       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 06:12:45.415915       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 06:12:45.415997       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.194.127:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	E1205 06:12:45.471447       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1205 06:13:55.925710       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40840: use of closed network connection
	
	
	==> kube-controller-manager [130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438] <==
	I1205 06:11:52.154813       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 06:11:52.155156       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 06:11:52.155218       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 06:11:52.155270       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1205 06:11:52.155518       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 06:11:52.156083       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:11:52.158453       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:11:52.158503       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 06:11:52.158533       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 06:11:52.161574       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:11:52.161642       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:11:52.161681       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:11:52.161692       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:11:52.161697       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:11:52.168284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:11:52.173631       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-640282" podCIDRs=["10.244.0.0/24"]
	E1205 06:11:57.516871       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1205 06:12:22.117351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 06:12:22.117504       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1205 06:12:22.117548       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1205 06:12:22.155177       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1205 06:12:22.160280       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1205 06:12:22.218065       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:12:22.260477       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:12:37.145786       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd] <==
	I1205 06:11:53.866345       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:11:53.967905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:11:54.068544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:11:54.068591       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:11:54.068662       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:11:54.112977       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:11:54.113050       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:11:54.127340       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:11:54.129996       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:11:54.130021       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:11:54.135841       1 config.go:200] "Starting service config controller"
	I1205 06:11:54.135863       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:11:54.135886       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:11:54.135891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:11:54.135909       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:11:54.135914       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:11:54.148906       1 config.go:309] "Starting node config controller"
	I1205 06:11:54.148928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:11:54.148936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:11:54.236405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:11:54.236449       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:11:54.236487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3] <==
	I1205 06:11:45.780403       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:11:45.780493       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:11:45.780814       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:11:45.785574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1205 06:11:45.786597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1205 06:11:45.792055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 06:11:45.792730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 06:11:45.792824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 06:11:45.792881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 06:11:45.793689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 06:11:45.798591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 06:11:45.798671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 06:11:45.798725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 06:11:45.798818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1205 06:11:45.798837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 06:11:45.798881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 06:11:45.798915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 06:11:45.798979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 06:11:45.799044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 06:11:45.799048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 06:11:45.799175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 06:11:45.799240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 06:11:45.799383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 06:11:46.600212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1205 06:11:49.180905       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:13:14 addons-640282 kubelet[1272]: I1205 06:13:14.501943    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xx96\" (UniqueName: \"kubernetes.io/projected/c72424af-2a1c-404d-a94f-300507c646db-kube-api-access-7xx96\") pod \"c72424af-2a1c-404d-a94f-300507c646db\" (UID: \"c72424af-2a1c-404d-a94f-300507c646db\") "
	Dec 05 06:13:14 addons-640282 kubelet[1272]: I1205 06:13:14.510093    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72424af-2a1c-404d-a94f-300507c646db-kube-api-access-7xx96" (OuterVolumeSpecName: "kube-api-access-7xx96") pod "c72424af-2a1c-404d-a94f-300507c646db" (UID: "c72424af-2a1c-404d-a94f-300507c646db"). InnerVolumeSpecName "kube-api-access-7xx96". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 05 06:13:14 addons-640282 kubelet[1272]: I1205 06:13:14.603573    1272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xx96\" (UniqueName: \"kubernetes.io/projected/c72424af-2a1c-404d-a94f-300507c646db-kube-api-access-7xx96\") on node \"addons-640282\" DevicePath \"\""
	Dec 05 06:13:15 addons-640282 kubelet[1272]: I1205 06:13:15.331780    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc76fbf5aceb6631eaaad91bb8da7a82682774d961f79860dc08b605f1ee306"
	Dec 05 06:13:15 addons-640282 kubelet[1272]: I1205 06:13:15.815106    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4gcw\" (UniqueName: \"kubernetes.io/projected/7ba4977e-d797-4844-8d1e-f04c991f354b-kube-api-access-s4gcw\") pod \"7ba4977e-d797-4844-8d1e-f04c991f354b\" (UID: \"7ba4977e-d797-4844-8d1e-f04c991f354b\") "
	Dec 05 06:13:15 addons-640282 kubelet[1272]: I1205 06:13:15.821456    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba4977e-d797-4844-8d1e-f04c991f354b-kube-api-access-s4gcw" (OuterVolumeSpecName: "kube-api-access-s4gcw") pod "7ba4977e-d797-4844-8d1e-f04c991f354b" (UID: "7ba4977e-d797-4844-8d1e-f04c991f354b"). InnerVolumeSpecName "kube-api-access-s4gcw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 05 06:13:15 addons-640282 kubelet[1272]: I1205 06:13:15.915771    1272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s4gcw\" (UniqueName: \"kubernetes.io/projected/7ba4977e-d797-4844-8d1e-f04c991f354b-kube-api-access-s4gcw\") on node \"addons-640282\" DevicePath \"\""
	Dec 05 06:13:16 addons-640282 kubelet[1272]: I1205 06:13:16.346134    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f791b250113c23974ef6fe24354e1ca6d49a05a99e2344eac8d9b5e0846d26ac"
	Dec 05 06:13:22 addons-640282 kubelet[1272]: I1205 06:13:22.276914    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-62wxv" podStartSLOduration=69.112528716 podStartE2EDuration="1m25.276895191s" podCreationTimestamp="2025-12-05 06:11:57 +0000 UTC" firstStartedPulling="2025-12-05 06:13:02.600229122 +0000 UTC m=+75.086624383" lastFinishedPulling="2025-12-05 06:13:18.76459558 +0000 UTC m=+91.250990858" observedRunningTime="2025-12-05 06:13:19.406627966 +0000 UTC m=+91.893023236" watchObservedRunningTime="2025-12-05 06:13:22.276895191 +0000 UTC m=+94.763290453"
	Dec 05 06:13:26 addons-640282 kubelet[1272]: I1205 06:13:26.448829    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-r8d6p" podStartSLOduration=69.095521256 podStartE2EDuration="1m28.448814548s" podCreationTimestamp="2025-12-05 06:11:58 +0000 UTC" firstStartedPulling="2025-12-05 06:13:06.386650744 +0000 UTC m=+78.873046006" lastFinishedPulling="2025-12-05 06:13:25.739944037 +0000 UTC m=+98.226339298" observedRunningTime="2025-12-05 06:13:26.448337588 +0000 UTC m=+98.934732850" watchObservedRunningTime="2025-12-05 06:13:26.448814548 +0000 UTC m=+98.935209810"
	Dec 05 06:13:28 addons-640282 kubelet[1272]: I1205 06:13:28.843027    1272 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 05 06:13:28 addons-640282 kubelet[1272]: I1205 06:13:28.843775    1272 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 05 06:13:31 addons-640282 kubelet[1272]: I1205 06:13:31.500307    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-dqw5d" podStartSLOduration=1.8746270040000002 podStartE2EDuration="57.500289915s" podCreationTimestamp="2025-12-05 06:12:34 +0000 UTC" firstStartedPulling="2025-12-05 06:12:35.224306103 +0000 UTC m=+47.710701365" lastFinishedPulling="2025-12-05 06:13:30.849969014 +0000 UTC m=+103.336364276" observedRunningTime="2025-12-05 06:13:31.49977576 +0000 UTC m=+103.986171030" watchObservedRunningTime="2025-12-05 06:13:31.500289915 +0000 UTC m=+103.986685185"
	Dec 05 06:13:39 addons-640282 kubelet[1272]: E1205 06:13:39.324702    1272 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 05 06:13:39 addons-640282 kubelet[1272]: E1205 06:13:39.324804    1272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc52dd35-e9bf-4770-aa18-66aac8d15c08-gcr-creds podName:fc52dd35-e9bf-4770-aa18-66aac8d15c08 nodeName:}" failed. No retries permitted until 2025-12-05 06:14:43.3247838 +0000 UTC m=+175.811179078 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/fc52dd35-e9bf-4770-aa18-66aac8d15c08-gcr-creds") pod "registry-creds-764b6fb674-4zwkd" (UID: "fc52dd35-e9bf-4770-aa18-66aac8d15c08") : secret "registry-creds-gcr" not found
	Dec 05 06:13:39 addons-640282 kubelet[1272]: W1205 06:13:39.328619    1272 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b467876b75d61422c534daf39b5f8cee026c4e2c32ff4b277e05b3a0c9a3b005/crio-91ec0cd7cdf85d2e93db4896fdd8b4560c65ef8e9adbf5e2b909303b0f6deefa WatchSource:0}: Error finding container 91ec0cd7cdf85d2e93db4896fdd8b4560c65ef8e9adbf5e2b909303b0f6deefa: Status 404 returned error can't find the container with id 91ec0cd7cdf85d2e93db4896fdd8b4560c65ef8e9adbf5e2b909303b0f6deefa
	Dec 05 06:13:45 addons-640282 kubelet[1272]: I1205 06:13:45.086802    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-xdcct" podStartSLOduration=100.248145415 podStartE2EDuration="1m43.086781587s" podCreationTimestamp="2025-12-05 06:12:02 +0000 UTC" firstStartedPulling="2025-12-05 06:13:39.3358348 +0000 UTC m=+111.822230062" lastFinishedPulling="2025-12-05 06:13:42.174470972 +0000 UTC m=+114.660866234" observedRunningTime="2025-12-05 06:13:42.540114677 +0000 UTC m=+115.026509947" watchObservedRunningTime="2025-12-05 06:13:45.086781587 +0000 UTC m=+117.573176857"
	Dec 05 06:13:45 addons-640282 kubelet[1272]: I1205 06:13:45.379906    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnsd\" (UniqueName: \"kubernetes.io/projected/4c88078d-a560-4a11-ba24-8bb270c20468-kube-api-access-gfnsd\") pod \"busybox\" (UID: \"4c88078d-a560-4a11-ba24-8bb270c20468\") " pod="default/busybox"
	Dec 05 06:13:45 addons-640282 kubelet[1272]: I1205 06:13:45.379965    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4c88078d-a560-4a11-ba24-8bb270c20468-gcp-creds\") pod \"busybox\" (UID: \"4c88078d-a560-4a11-ba24-8bb270c20468\") " pod="default/busybox"
	Dec 05 06:13:45 addons-640282 kubelet[1272]: I1205 06:13:45.677857    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-jbbkj" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 06:13:45 addons-640282 kubelet[1272]: I1205 06:13:45.681294    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c72424af-2a1c-404d-a94f-300507c646db" path="/var/lib/kubelet/pods/c72424af-2a1c-404d-a94f-300507c646db/volumes"
	Dec 05 06:13:47 addons-640282 kubelet[1272]: I1205 06:13:47.654365    1272 scope.go:117] "RemoveContainer" containerID="3947cbf40b9ad541f29d5ed9aaa65645f4251a0b8e9f7d0daf12c4de613458b1"
	Dec 05 06:13:47 addons-640282 kubelet[1272]: I1205 06:13:47.688066    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba4977e-d797-4844-8d1e-f04c991f354b" path="/var/lib/kubelet/pods/7ba4977e-d797-4844-8d1e-f04c991f354b/volumes"
	Dec 05 06:13:47 addons-640282 kubelet[1272]: E1205 06:13:47.813749    1272 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-2fhtg_7ba4977e-d797-4844-8d1e-f04c991f354b/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-2fhtg_7ba4977e-d797-4844-8d1e-f04c991f354b/patch/1.log: no such file or directory
	Dec 05 06:13:47 addons-640282 kubelet[1272]: E1205 06:13:47.850315    1272 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/20a9fbd6b8006b5f2f0d70e87214f21c5337252d0511712a34f06e10b1ba3c81/diff" to get inode usage: stat /var/lib/containers/storage/overlay/20a9fbd6b8006b5f2f0d70e87214f21c5337252d0511712a34f06e10b1ba3c81/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108] <==
	W1205 06:13:33.768159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:35.771287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:35.777918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:37.781251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:37.788801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:39.793212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:39.798241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:41.803052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:41.808188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:43.811321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:43.815906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:45.819360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:45.823734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:47.827253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:47.831795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:49.835270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:49.839662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:51.843304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:51.847764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:53.852459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:53.857371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:55.865489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:55.874570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:57.877671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:13:57.882886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
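	These warnings repeat on every leader-election renewal (roughly every 2s) because this storage-provisioner build still takes its lock on the legacy v1 Endpoints object, which the apiserver now flags as deprecated. In client-go the remedy is to construct the lock on coordination.k8s.io Leases instead; a minimal sketch of the relevant call follows (the package name, namespace and identity are assumptions, not code from the provisioner).
	
	    package lockexample
	
	    import (
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/leaderelection/resourcelock"
	    )
	
	    // newLock builds the election lock on coordination.k8s.io Leases rather than
	    // the deprecated v1 Endpoints; namespace and identity values are assumptions.
	    func newLock(cs kubernetes.Interface, ns, id string) (resourcelock.Interface, error) {
	    	return resourcelock.New(
	    		resourcelock.LeasesResourceLock, // instead of the legacy "endpoints" lock
	    		ns, "storage-provisioner",
	    		cs.CoreV1(), cs.CoordinationV1(),
	    		resourcelock.ResourceLockConfig{Identity: id},
	    	)
	    }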
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-640282 -n addons-640282
helpers_test.go:269: (dbg) Run:  kubectl --context addons-640282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx registry-creds-764b6fb674-4zwkd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-640282 describe pod ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx registry-creds-764b6fb674-4zwkd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-640282 describe pod ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx registry-creds-764b6fb674-4zwkd: exit status 1 (88.940633ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w7jhq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rl2nx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-4zwkd" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-640282 describe pod ingress-nginx-admission-create-w7jhq ingress-nginx-admission-patch-rl2nx registry-creds-764b6fb674-4zwkd: exit status 1
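The post-mortem lost a race here: the three pods were still listed as non-Running by the field-selector query, but had already been deleted by the time kubectl describe pod ran, so all three return NotFound and the describe exits 1. A race-tolerant variant would classify the error instead of propagating it; a minimal Go sketch, with the pod name copied from the output above and the error constructed locally purely for illustration:

	package main

	import (
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		"k8s.io/apimachinery/pkg/runtime/schema"
	)

	func main() {
		// Stand-in for the "pods ... not found" errors above; constructed
		// locally purely for illustration.
		err := apierrors.NewNotFound(
			schema.GroupResource{Resource: "pods"}, "ingress-nginx-admission-create-w7jhq")

		if apierrors.IsNotFound(err) {
			// Completed admission-job pods are typically cleaned up quickly, so a
			// post-mortem can treat NotFound as "already gone" rather than a failure.
			fmt.Println("pod already gone; skipping describe")
		}
	}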
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable headlamp --alsologtostderr -v=1: exit status 11 (276.848291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:13:59.279638  451719 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:13:59.280413  451719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:59.280428  451719 out.go:374] Setting ErrFile to fd 2...
	I1205 06:13:59.280435  451719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:13:59.280729  451719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:13:59.281057  451719 mustload.go:66] Loading cluster: addons-640282
	I1205 06:13:59.281569  451719 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:59.281590  451719 addons.go:622] checking whether the cluster is paused
	I1205 06:13:59.281781  451719 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:13:59.281799  451719 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:13:59.282609  451719 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:13:59.299284  451719 ssh_runner.go:195] Run: systemctl --version
	I1205 06:13:59.299350  451719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:13:59.321600  451719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:13:59.429137  451719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:13:59.429279  451719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:13:59.459147  451719 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:13:59.459178  451719 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:13:59.459183  451719 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:13:59.459191  451719 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:13:59.459219  451719 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:13:59.459230  451719 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:13:59.459233  451719 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:13:59.459236  451719 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:13:59.459240  451719 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:13:59.459245  451719 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:13:59.459254  451719 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:13:59.459257  451719 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:13:59.459260  451719 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:13:59.459263  451719 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:13:59.459266  451719 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:13:59.459271  451719 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:13:59.459302  451719 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:13:59.459308  451719 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:13:59.459311  451719 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:13:59.459315  451719 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:13:59.459321  451719 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:13:59.459329  451719 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:13:59.459332  451719 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:13:59.459336  451719 cri.go:89] found id: ""
	I1205 06:13:59.459411  451719 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:13:59.480104  451719 out.go:203] 
	W1205 06:13:59.484897  451719 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:13:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:13:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:13:59.484924  451719 out.go:285] * 
	* 
	W1205 06:13:59.491334  451719 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:13:59.495693  451719 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.30s)
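Note: the "exit status 11" above is not a Headlamp problem. Before disabling an addon, minikube checks whether the cluster is paused, and on a CRI-O node it does so by running "sudo runc list -f json" over SSH; because /run/runc does not exist in this image, the check itself fails with MK_ADDON_DISABLE_PAUSED before the addon is touched. The same check is what fails in the CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd tests below. A minimal manual reproduction over minikube's SSH (the /run/crun path is an illustrative guess at where the runtime state directory might actually live, not something confirmed by this run):

	out/minikube-linux-arm64 -p addons-640282 ssh -- sudo runc list -f json     # expected to fail: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p addons-640282 ssh -- ls -d /run/runc /run/crun  # shows which runtime state directory, if any, exists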

TestAddons/parallel/CloudSpanner (6.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-4xzt7" [077b206b-a3d2-4599-8ecb-5aa24ba10c29] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00367885s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (280.099803ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:15:07.566115  453661 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:15:07.566886  453661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:15:07.566938  453661 out.go:374] Setting ErrFile to fd 2...
	I1205 06:15:07.566957  453661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:15:07.567256  453661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:15:07.567644  453661 mustload.go:66] Loading cluster: addons-640282
	I1205 06:15:07.568084  453661 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:15:07.568124  453661 addons.go:622] checking whether the cluster is paused
	I1205 06:15:07.568278  453661 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:15:07.568311  453661 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:15:07.568890  453661 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:15:07.590578  453661 ssh_runner.go:195] Run: systemctl --version
	I1205 06:15:07.590639  453661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:15:07.608819  453661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:15:07.717128  453661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:15:07.717223  453661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:15:07.748361  453661 cri.go:89] found id: "04d129d7e1de2c27c67a8a353fdea9212db947cc7cfa8a3e3addc4a1ddafecf3"
	I1205 06:15:07.748386  453661 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:15:07.748397  453661 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:15:07.748401  453661 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:15:07.748404  453661 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:15:07.748408  453661 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:15:07.748410  453661 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:15:07.748413  453661 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:15:07.748416  453661 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:15:07.748422  453661 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:15:07.748426  453661 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:15:07.748430  453661 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:15:07.748433  453661 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:15:07.748436  453661 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:15:07.748440  453661 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:15:07.748445  453661 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:15:07.748457  453661 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:15:07.748461  453661 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:15:07.748464  453661 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:15:07.748468  453661 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:15:07.748473  453661 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:15:07.748476  453661 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:15:07.748479  453661 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:15:07.748482  453661 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:15:07.748486  453661 cri.go:89] found id: ""
	I1205 06:15:07.748536  453661 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:15:07.765229  453661 out.go:203] 
	W1205 06:15:07.768308  453661 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:15:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:15:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:15:07.768338  453661 out.go:285] * 
	* 
	W1205 06:15:07.774708  453661 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:15:07.777668  453661 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.29s)

TestAddons/parallel/LocalPath (9.7s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-640282 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-640282 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [fe12cc48-bf20-489c-a531-aea416901327] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [fe12cc48-bf20-489c-a531-aea416901327] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [fe12cc48-bf20-489c-a531-aea416901327] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.025980172s
addons_test.go:967: (dbg) Run:  kubectl --context addons-640282 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 ssh "cat /opt/local-path-provisioner/pvc-dc2d0fe3-6c45-402d-afbe-77edefd5e6d2_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-640282 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-640282 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (299.789791ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:15:01.247108  453543 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:15:01.247860  453543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:15:01.247874  453543 out.go:374] Setting ErrFile to fd 2...
	I1205 06:15:01.247879  453543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:15:01.248269  453543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:15:01.249485  453543 mustload.go:66] Loading cluster: addons-640282
	I1205 06:15:01.249977  453543 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:15:01.249993  453543 addons.go:622] checking whether the cluster is paused
	I1205 06:15:01.250147  453543 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:15:01.250160  453543 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:15:01.250797  453543 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:15:01.279987  453543 ssh_runner.go:195] Run: systemctl --version
	I1205 06:15:01.280046  453543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:15:01.300488  453543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:15:01.405402  453543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:15:01.405495  453543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:15:01.435624  453543 cri.go:89] found id: "04d129d7e1de2c27c67a8a353fdea9212db947cc7cfa8a3e3addc4a1ddafecf3"
	I1205 06:15:01.435655  453543 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:15:01.435660  453543 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:15:01.435665  453543 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:15:01.435668  453543 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:15:01.435672  453543 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:15:01.435675  453543 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:15:01.435678  453543 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:15:01.435682  453543 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:15:01.435692  453543 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:15:01.435696  453543 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:15:01.435700  453543 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:15:01.435704  453543 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:15:01.435707  453543 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:15:01.435710  453543 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:15:01.435722  453543 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:15:01.435729  453543 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:15:01.435733  453543 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:15:01.435737  453543 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:15:01.435740  453543 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:15:01.435745  453543 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:15:01.435748  453543 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:15:01.435751  453543 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:15:01.435755  453543 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:15:01.435758  453543 cri.go:89] found id: ""
	I1205 06:15:01.435821  453543 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:15:01.473929  453543 out.go:203] 
	W1205 06:15:01.477005  453543 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:15:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:15:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:15:01.477087  453543 out.go:285] * 
	* 
	W1205 06:15:01.483847  453543 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:15:01.488039  453543 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.70s)

TestAddons/parallel/NvidiaDevicePlugin (6.31s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ft52z" [9aef4bec-ecc2-4c2e-98b0-84aa547b79e6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006225283s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (302.24729ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:45.276498  453080 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:45.277260  453080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:45.277274  453080 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:45.277280  453080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:45.277549  453080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:45.277902  453080 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:45.278305  453080 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:45.278325  453080 addons.go:622] checking whether the cluster is paused
	I1205 06:14:45.278473  453080 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:45.278494  453080 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:45.279049  453080 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:45.310179  453080 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:45.310241  453080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:45.332096  453080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:45.440899  453080 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:45.440997  453080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:45.483600  453080 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:45.483623  453080 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:45.483628  453080 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:45.483633  453080 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:45.483636  453080 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:45.483640  453080 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:45.483643  453080 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:45.483670  453080 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:45.483675  453080 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:45.483687  453080 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:45.483694  453080 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:45.483697  453080 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:45.483701  453080 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:45.483704  453080 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:45.483708  453080 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:45.483720  453080 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:45.483727  453080 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:45.483799  453080 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:45.483811  453080 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:45.483815  453080 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:45.483821  453080 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:45.483825  453080 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:45.483828  453080 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:45.483831  453080 cri.go:89] found id: ""
	I1205 06:14:45.483897  453080 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:45.499947  453080 out.go:203] 
	W1205 06:14:45.503222  453080 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:45.503250  453080 out.go:285] * 
	* 
	W1205 06:14:45.509647  453080 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:45.512778  453080 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.31s)

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-tgblp" [b532edb2-d087-493b-8409-324c219a8e5d] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003276504s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-640282 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-640282 addons disable yakd --alsologtostderr -v=1: exit status 11 (270.161178ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 06:14:51.569154  453145 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:14:51.571703  453145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:51.571725  453145 out.go:374] Setting ErrFile to fd 2...
	I1205 06:14:51.571732  453145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:14:51.572097  453145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:14:51.572442  453145 mustload.go:66] Loading cluster: addons-640282
	I1205 06:14:51.572872  453145 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:51.572893  453145 addons.go:622] checking whether the cluster is paused
	I1205 06:14:51.573026  453145 config.go:182] Loaded profile config "addons-640282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:14:51.573041  453145 host.go:66] Checking if "addons-640282" exists ...
	I1205 06:14:51.573646  453145 cli_runner.go:164] Run: docker container inspect addons-640282 --format={{.State.Status}}
	I1205 06:14:51.590943  453145 ssh_runner.go:195] Run: systemctl --version
	I1205 06:14:51.591021  453145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-640282
	I1205 06:14:51.608150  453145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/addons-640282/id_rsa Username:docker}
	I1205 06:14:51.714283  453145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:14:51.714420  453145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:14:51.756439  453145 cri.go:89] found id: "ae8fe59a87c4cac547bc84ca93e7e3c74189e2a1445d2a9c8b57201a989d61c2"
	I1205 06:14:51.756458  453145 cri.go:89] found id: "ee08f2df7a0e7f56e1e7da1430db783afa5b12859b29c4b826aa6be0c4310f84"
	I1205 06:14:51.756463  453145 cri.go:89] found id: "1343c4e249efabc939fdbf9eda1f854f67300b2c4006b9f12ef625dbf1622261"
	I1205 06:14:51.756466  453145 cri.go:89] found id: "576b9f44bab0ba04e6adf75b9c31a2e08a901552869e55b4c71f0e8874747ee2"
	I1205 06:14:51.756470  453145 cri.go:89] found id: "36207d2abda3a6fa99a7425309d9219d91c90bfece5e387c3026975757efad83"
	I1205 06:14:51.756473  453145 cri.go:89] found id: "8ca95d8216ff95e5a78898289609a5c82f657a0fd77d1e73d45946aec222afbd"
	I1205 06:14:51.756476  453145 cri.go:89] found id: "e05ecd19c0205084c6013857e84d16015a05c3319c74917b5bb8976fdb8932ef"
	I1205 06:14:51.756479  453145 cri.go:89] found id: "2c117baff7e0b4e63c326a8dbbdfd3389a9a2aa8b1f7cd559e02a160c986d69b"
	I1205 06:14:51.756482  453145 cri.go:89] found id: "1e9a9d06da060608fee3b68d4bc92dcc8671689134a16a427612571a1aadda44"
	I1205 06:14:51.756490  453145 cri.go:89] found id: "eb58125e4d2c78feab9622cae7875d00c6c6e394fe17a098bfd812ca3e2187c3"
	I1205 06:14:51.756493  453145 cri.go:89] found id: "249d0f3d91c825e0102712f81895dd88e7b69e43b8c2a89abcb560bd77d70dbb"
	I1205 06:14:51.756496  453145 cri.go:89] found id: "56309b0868051bb27bcffc29131f773b5fba7beeb88f1437d7d5a8c32e0ae92b"
	I1205 06:14:51.756500  453145 cri.go:89] found id: "513dee4bbb57b7e27432cad78b22015eba61566cbe6fdacf7f57da376ada5476"
	I1205 06:14:51.756503  453145 cri.go:89] found id: "75d36e5745352b8942263580d4020bbc554a8058a84d8f44d489261025381133"
	I1205 06:14:51.756506  453145 cri.go:89] found id: "18303e803325e0ef3e42b48c82523a731bb49d5f798c4368188d585f5e6e0d3d"
	I1205 06:14:51.756514  453145 cri.go:89] found id: "8f819a6511b2f4701363f12c7aa3fa4fb9c728aeae3c10d952706655b90e2108"
	I1205 06:14:51.756518  453145 cri.go:89] found id: "9e50a765cdd0ba77e1c41400bc47773e58bda2ef866c19e3cc2c1cf9c037ab84"
	I1205 06:14:51.756522  453145 cri.go:89] found id: "954b5a1cbede7815087d62f9f0e13658fb125b4bf1a0b2a5a2bfc83ce68bdebd"
	I1205 06:14:51.756525  453145 cri.go:89] found id: "afa775c377245524bac3f3b53e56994de7e03b04cb7dcd4c4e6ac97adf392d8b"
	I1205 06:14:51.756528  453145 cri.go:89] found id: "dbaf492de7d0d36ef69d07361b4a12c2172ad60c998d653852a7b56fadf88db3"
	I1205 06:14:51.756532  453145 cri.go:89] found id: "130424b6298d0ba2f2f2d975a1b8e4015951d60f4d4e0e2ee26fa6a669dd7438"
	I1205 06:14:51.756535  453145 cri.go:89] found id: "ce5973768e215a69db996295218f069ce16defde26378721f4c6340b48222693"
	I1205 06:14:51.756538  453145 cri.go:89] found id: "6a73bdffbbb7cc0b050e906e75fa7c0030229a7e1258150b249fe2618338889e"
	I1205 06:14:51.756541  453145 cri.go:89] found id: ""
	I1205 06:14:51.756592  453145 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 06:14:51.775379  453145 out.go:203] 
	W1205 06:14:51.778539  453145 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:14:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 06:14:51.778637  453145 out.go:285] * 
	* 
	W1205 06:14:51.785205  453145 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:14:51.788362  453145 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-640282 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

TestFunctional/parallel/ServiceCmdConnect (603.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-252233 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-252233 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cprsz" [2489e9e8-e3a8-40e9-b275-d19ed8eed261] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-cprsz" [2489e9e8-e3a8-40e9-b275-d19ed8eed261] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-252233 -n functional-252233
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-05 06:30:48.834146683 +0000 UTC m=+1188.902479502
functional_test.go:1645: (dbg) Run:  kubectl --context functional-252233 describe po hello-node-connect-7d85dfc575-cprsz -n default
functional_test.go:1645: (dbg) kubectl --context functional-252233 describe po hello-node-connect-7d85dfc575-cprsz -n default:
Name:             hello-node-connect-7d85dfc575-cprsz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-252233/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:20:48 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qw7nr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-qw7nr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cprsz to functional-252233
  Normal   Pulling    6m51s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m51s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m51s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-252233 logs hello-node-connect-7d85dfc575-cprsz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-252233 logs hello-node-connect-7d85dfc575-cprsz -n default: exit status 1 (106.995776ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cprsz" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-252233 logs hello-node-connect-7d85dfc575-cprsz -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
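Note: the failure mode is visible in the pod events above: CRI-O's short-name resolution is set to enforcing, so the unqualified image reference "kicbase/echo-server" resolves to an ambiguous list of registries and the pull is rejected rather than silently defaulting to one. One possible fix, sketched here on the assumption that the image is meant to come from Docker Hub, is to point the existing container at a fully qualified reference:

	kubectl --context functional-252233 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest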
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-252233 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cprsz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-252233/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:20:48 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qw7nr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-qw7nr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cprsz to functional-252233
  Normal   Pulling    6m52s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m52s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m52s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-252233 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-252233 logs -l app=hello-node-connect: exit status 1 (84.410655ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cprsz" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-252233 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-252233 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.124.250
IPs:                      10.110.124.250
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30534/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-252233
helpers_test.go:243: (dbg) docker inspect functional-252233:

-- stdout --
	[
	    {
	        "Id": "9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042",
	        "Created": "2025-12-05T06:18:01.90081295Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:18:01.980294742Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042/hostname",
	        "HostsPath": "/var/lib/docker/containers/9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042/hosts",
	        "LogPath": "/var/lib/docker/containers/9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042/9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042-json.log",
	        "Name": "/functional-252233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-252233:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-252233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9643e1c59f961cd7af3a310a562e07a48e4b3ccdeba8d0960e63090b81c8f042",
	                "LowerDir": "/var/lib/docker/overlay2/9bd19422b1b83e4769c65b2bca9909e06c5a643df40b31cad7d9e99865a68d25-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bd19422b1b83e4769c65b2bca9909e06c5a643df40b31cad7d9e99865a68d25/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bd19422b1b83e4769c65b2bca9909e06c5a643df40b31cad7d9e99865a68d25/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bd19422b1b83e4769c65b2bca9909e06c5a643df40b31cad7d9e99865a68d25/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-252233",
	                "Source": "/var/lib/docker/volumes/functional-252233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-252233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-252233",
	                "name.minikube.sigs.k8s.io": "functional-252233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "959d53d7c134b48e416165e625d678f40b06f9f0774992e2df9424d90d336fe1",
	            "SandboxKey": "/var/run/docker/netns/959d53d7c134",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-252233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:b0:6c:a7:82:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28e5032a93c339fc43a0d99dacd52bd584f76b2d8fdd1436764f2f1cc2da37e4",
	                    "EndpointID": "d570f0d558f920a6f94ad3b73dd950f89e57ecc2496072dddf7e982e6e9971b3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-252233",
	                        "9643e1c59f96"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-252233 -n functional-252233
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 logs -n 25: (1.459451191s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-252233 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:19 UTC │ 05 Dec 25 06:19 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:19 UTC │ 05 Dec 25 06:19 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:19 UTC │ 05 Dec 25 06:19 UTC │
	│ kubectl │ functional-252233 kubectl -- --context functional-252233 get pods                                                          │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:19 UTC │ 05 Dec 25 06:19 UTC │
	│ start   │ -p functional-252233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:19 UTC │ 05 Dec 25 06:20 UTC │
	│ service │ invalid-svc -p functional-252233                                                                                           │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ config  │ functional-252233 config unset cpus                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ cp      │ functional-252233 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ config  │ functional-252233 config get cpus                                                                                          │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ config  │ functional-252233 config set cpus 2                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ config  │ functional-252233 config get cpus                                                                                          │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ config  │ functional-252233 config unset cpus                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ ssh     │ functional-252233 ssh -n functional-252233 sudo cat /home/docker/cp-test.txt                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ config  │ functional-252233 config get cpus                                                                                          │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ ssh     │ functional-252233 ssh echo hello                                                                                           │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ cp      │ functional-252233 cp functional-252233:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2956756363/001/cp-test.txt │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ ssh     │ functional-252233 ssh cat /etc/hostname                                                                                    │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ ssh     │ functional-252233 ssh -n functional-252233 sudo cat /home/docker/cp-test.txt                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ tunnel  │ functional-252233 tunnel --alsologtostderr                                                                                 │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ tunnel  │ functional-252233 tunnel --alsologtostderr                                                                                 │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ cp      │ functional-252233 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ tunnel  │ functional-252233 tunnel --alsologtostderr                                                                                 │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │                     │
	│ ssh     │ functional-252233 ssh -n functional-252233 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ addons  │ functional-252233 addons list                                                                                              │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	│ addons  │ functional-252233 addons list -o json                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:20 UTC │ 05 Dec 25 06:20 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:19:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:19:54.368657  464214 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:19:54.368805  464214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:19:54.368809  464214 out.go:374] Setting ErrFile to fd 2...
	I1205 06:19:54.368813  464214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:19:54.369102  464214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:19:54.369580  464214 out.go:368] Setting JSON to false
	I1205 06:19:54.370646  464214 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10922,"bootTime":1764904673,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:19:54.370706  464214 start.go:143] virtualization:  
	I1205 06:19:54.374506  464214 out.go:179] * [functional-252233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:19:54.378470  464214 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:19:54.378547  464214 notify.go:221] Checking for updates...
	I1205 06:19:54.385306  464214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:19:54.388249  464214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:19:54.391298  464214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:19:54.394205  464214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:19:54.397199  464214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:19:54.400555  464214 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:19:54.400650  464214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:19:54.424863  464214 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:19:54.424965  464214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:19:54.489585  464214 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-05 06:19:54.479951823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:19:54.489682  464214 docker.go:319] overlay module found
	I1205 06:19:54.492915  464214 out.go:179] * Using the docker driver based on existing profile
	I1205 06:19:54.495712  464214 start.go:309] selected driver: docker
	I1205 06:19:54.495722  464214 start.go:927] validating driver "docker" against &{Name:functional-252233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:19:54.495811  464214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:19:54.495913  464214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:19:54.545823  464214 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-05 06:19:54.536431284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:19:54.546221  464214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:19:54.546244  464214 cni.go:84] Creating CNI manager for ""
	I1205 06:19:54.546295  464214 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:19:54.546335  464214 start.go:353] cluster config:
	{Name:functional-252233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:19:54.549503  464214 out.go:179] * Starting "functional-252233" primary control-plane node in "functional-252233" cluster
	I1205 06:19:54.552447  464214 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:19:54.555326  464214 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:19:54.558146  464214 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:19:54.558184  464214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1205 06:19:54.558194  464214 cache.go:65] Caching tarball of preloaded images
	I1205 06:19:54.558226  464214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:19:54.558283  464214 preload.go:238] Found /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 06:19:54.558292  464214 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 06:19:54.558428  464214 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/config.json ...
	I1205 06:19:54.576531  464214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:19:54.576541  464214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 06:19:54.576561  464214 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:19:54.576592  464214 start.go:360] acquireMachinesLock for functional-252233: {Name:mk34d3797282b3435eeb35650a98bdea9da72e76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:19:54.576657  464214 start.go:364] duration metric: took 49.273µs to acquireMachinesLock for "functional-252233"
	I1205 06:19:54.576684  464214 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:19:54.576688  464214 fix.go:54] fixHost starting: 
	I1205 06:19:54.576949  464214 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
	I1205 06:19:54.593779  464214 fix.go:112] recreateIfNeeded on functional-252233: state=Running err=<nil>
	W1205 06:19:54.593799  464214 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:19:54.596872  464214 out.go:252] * Updating the running docker "functional-252233" container ...
	I1205 06:19:54.596898  464214 machine.go:94] provisionDockerMachine start ...
	I1205 06:19:54.596984  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:19:54.614458  464214 main.go:143] libmachine: Using SSH client type: native
	I1205 06:19:54.614795  464214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1205 06:19:54.614801  464214 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:19:54.761838  464214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-252233
	
	I1205 06:19:54.761852  464214 ubuntu.go:182] provisioning hostname "functional-252233"
	I1205 06:19:54.761915  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:19:54.779682  464214 main.go:143] libmachine: Using SSH client type: native
	I1205 06:19:54.779981  464214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1205 06:19:54.779991  464214 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-252233 && echo "functional-252233" | sudo tee /etc/hostname
	I1205 06:19:54.939934  464214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-252233
	
	I1205 06:19:54.940025  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:19:54.958496  464214 main.go:143] libmachine: Using SSH client type: native
	I1205 06:19:54.958802  464214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1205 06:19:54.958815  464214 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-252233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-252233/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-252233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:19:55.110976  464214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:19:55.110992  464214 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:19:55.111009  464214 ubuntu.go:190] setting up certificates
	I1205 06:19:55.111018  464214 provision.go:84] configureAuth start
	I1205 06:19:55.111087  464214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-252233
	I1205 06:19:55.130276  464214 provision.go:143] copyHostCerts
	I1205 06:19:55.130345  464214 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:19:55.130357  464214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:19:55.130456  464214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:19:55.130560  464214 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:19:55.130564  464214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:19:55.130589  464214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:19:55.130636  464214 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:19:55.130639  464214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:19:55.130661  464214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:19:55.130704  464214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-252233 san=[127.0.0.1 192.168.49.2 functional-252233 localhost minikube]
	I1205 06:19:55.211900  464214 provision.go:177] copyRemoteCerts
	I1205 06:19:55.211953  464214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:19:55.211998  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:19:55.237518  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:19:55.342267  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:19:55.359855  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:19:55.378027  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 06:19:55.396127  464214 provision.go:87] duration metric: took 285.097184ms to configureAuth
	I1205 06:19:55.396161  464214 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:19:55.396387  464214 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:19:55.396493  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:19:55.413879  464214 main.go:143] libmachine: Using SSH client type: native
	I1205 06:19:55.414193  464214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1205 06:19:55.414204  464214 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:20:00.865670  464214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:20:00.865682  464214 machine.go:97] duration metric: took 6.268777562s to provisionDockerMachine
	I1205 06:20:00.865693  464214 start.go:293] postStartSetup for "functional-252233" (driver="docker")
	I1205 06:20:00.865720  464214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:20:00.865783  464214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:20:00.865827  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:20:00.891375  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:20:00.994496  464214 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:20:00.998002  464214 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:20:00.998021  464214 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:20:00.998030  464214 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:20:00.998086  464214 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:20:00.998160  464214 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:20:00.998241  464214 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:20:00.998286  464214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:20:01.006906  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:20:01.026179  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:20:01.044735  464214 start.go:296] duration metric: took 179.028032ms for postStartSetup
	I1205 06:20:01.044808  464214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:20:01.044854  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:20:01.068346  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:20:01.181514  464214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:20:01.190318  464214 fix.go:56] duration metric: took 6.613622137s for fixHost
	I1205 06:20:01.190334  464214 start.go:83] releasing machines lock for "functional-252233", held for 6.613670268s
	I1205 06:20:01.190442  464214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-252233
	I1205 06:20:01.216099  464214 ssh_runner.go:195] Run: cat /version.json
	I1205 06:20:01.216151  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:20:01.216407  464214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:20:01.216453  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:20:01.238597  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:20:01.256018  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:20:01.350165  464214 ssh_runner.go:195] Run: systemctl --version
	I1205 06:20:01.440704  464214 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:20:01.477778  464214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:20:01.482324  464214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:20:01.482423  464214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:20:01.490778  464214 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:20:01.490792  464214 start.go:496] detecting cgroup driver to use...
	I1205 06:20:01.490824  464214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:20:01.490871  464214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:20:01.506660  464214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:20:01.519748  464214 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:20:01.519800  464214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:20:01.536082  464214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:20:01.549377  464214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:20:01.684585  464214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:20:01.825454  464214 docker.go:234] disabling docker service ...
	I1205 06:20:01.825527  464214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:20:01.842164  464214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:20:01.855688  464214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:20:02.011348  464214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:20:02.156911  464214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:20:02.170661  464214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:20:02.185286  464214 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:20:02.185362  464214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.194758  464214 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:20:02.194818  464214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.203945  464214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.213701  464214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.223074  464214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:20:02.232389  464214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.241494  464214 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.249983  464214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:20:02.258991  464214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:20:02.266897  464214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:20:02.274610  464214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:20:02.412679  464214 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:20:06.368885  464214 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.956181811s)
	I1205 06:20:06.368902  464214 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:20:06.368955  464214 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:20:06.373261  464214 start.go:564] Will wait 60s for crictl version
	I1205 06:20:06.373322  464214 ssh_runner.go:195] Run: which crictl
	I1205 06:20:06.376953  464214 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:20:06.401684  464214 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:20:06.401757  464214 ssh_runner.go:195] Run: crio --version
	I1205 06:20:06.438437  464214 ssh_runner.go:195] Run: crio --version
	I1205 06:20:06.469410  464214 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 06:20:06.472302  464214 cli_runner.go:164] Run: docker network inspect functional-252233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:20:06.488804  464214 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:20:06.496224  464214 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:20:06.499077  464214 kubeadm.go:884] updating cluster {Name:functional-252233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:20:06.499214  464214 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 06:20:06.499285  464214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:20:06.540679  464214 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:20:06.540690  464214 crio.go:433] Images already preloaded, skipping extraction
	I1205 06:20:06.540749  464214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:20:06.566843  464214 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:20:06.566856  464214 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:20:06.566862  464214 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.2 crio true true} ...
	I1205 06:20:06.566958  464214 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-252233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:20:06.567041  464214 ssh_runner.go:195] Run: crio config
	I1205 06:20:06.640258  464214 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:20:06.640293  464214 cni.go:84] Creating CNI manager for ""
	I1205 06:20:06.640307  464214 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:20:06.640326  464214 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:20:06.640358  464214 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-252233 NodeName:functional-252233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:20:06.640480  464214 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-252233"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
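The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal Go sketch that enumerates the documents and their kinds before handing the file to kubeadm, assuming gopkg.in/yaml.v3 is available; the local path is illustrative, not part of the run:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3" // assumption: yaml.v3 is vendored
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative local copy
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // yaml.v3's Decoder walks a multi-document stream one doc at a time.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatalf("invalid YAML document: %v", err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }

For the stream above this would print the four apiVersion/kind pairs shown, which is a quick way to confirm nothing was truncated in transfer.
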
	I1205 06:20:06.640564  464214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 06:20:06.648845  464214 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:20:06.648906  464214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:20:06.656778  464214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1205 06:20:06.676813  464214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 06:20:06.690333  464214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1205 06:20:06.703448  464214 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:20:06.707223  464214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:20:06.867499  464214 ssh_runner.go:195] Run: sudo systemctl start kubelet
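
The `scp memory -->` lines mean the rendered files are written straight from memory over SSH rather than copied from disk; the kubelet drop-in only takes effect after `systemctl daemon-reload`, which is why that runs before `systemctl start kubelet`. A local-equivalent sketch in Go, with an abbreviated drop-in body and root privileges assumed (this mirrors the logged sequence, not minikube's actual code path):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Abbreviated; the real unit carries the full kubelet flag set shown above.
        dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --config=/var/lib/kubelet/config.yaml
    `

        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
            []byte(dropIn), 0o644); err != nil {
            log.Fatal(err)
        }
        // Reload unit definitions, then start the service.
        for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                log.Fatalf("systemctl %v: %v\n%s", args, err, out)
            }
        }
    }
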
	I1205 06:20:06.881993  464214 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233 for IP: 192.168.49.2
	I1205 06:20:06.882004  464214 certs.go:195] generating shared ca certs ...
	I1205 06:20:06.882019  464214 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:20:06.882194  464214 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:20:06.882248  464214 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:20:06.882254  464214 certs.go:257] generating profile certs ...
	I1205 06:20:06.882343  464214 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.key
	I1205 06:20:06.882423  464214 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/apiserver.key.6f33f0e6
	I1205 06:20:06.882482  464214 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/proxy-client.key
	I1205 06:20:06.882614  464214 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:20:06.882651  464214 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:20:06.882658  464214 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:20:06.882695  464214 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:20:06.882718  464214 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:20:06.882748  464214 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:20:06.882802  464214 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:20:06.883521  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:20:06.905250  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:20:06.924778  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:20:06.943955  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:20:06.962021  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:20:06.980584  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 06:20:06.999519  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:20:07.018938  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 06:20:07.037459  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:20:07.054979  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:20:07.072663  464214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:20:07.090052  464214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:20:07.102584  464214 ssh_runner.go:195] Run: openssl version
	I1205 06:20:07.109058  464214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:20:07.116576  464214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:20:07.124060  464214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:20:07.127900  464214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:20:07.127958  464214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:20:07.170682  464214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:20:07.178200  464214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:20:07.185670  464214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:20:07.193551  464214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:20:07.197478  464214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:17 /usr/share/ca-certificates/444147.pem
	I1205 06:20:07.197534  464214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:20:07.240790  464214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:20:07.248423  464214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:20:07.255821  464214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:20:07.263453  464214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:20:07.267332  464214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:17 /usr/share/ca-certificates/4441472.pem
	I1205 06:20:07.267390  464214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:20:07.308619  464214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
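
The `openssl x509 -hash -noout` runs above print each certificate's subject hash (b5213941 for minikubeCA here), and the `ln -fs` plus `test -L` pairs maintain the <hash>.0 symlinks in /etc/ssl/certs that OpenSSL uses for trust-store lookups. A sketch of the same hash-and-symlink step in Go, assuming openssl is on PATH; the paths mirror the log but are illustrative:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mimics `openssl x509 -hash -noout -in <pem>`
    // followed by `ln -fs <pem> <certsDir>/<hash>.0`.
    func linkBySubjectHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("openssl: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // the -f in ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem",
            "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }

Removing before symlinking reproduces `ln -fs`, which quietly replaces a link left over from an earlier profile.
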
	I1205 06:20:07.318553  464214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:20:07.322576  464214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:20:07.364536  464214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:20:07.405536  464214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:20:07.446779  464214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:20:07.488457  464214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:20:07.529382  464214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
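
`-checkend 86400` makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24h); a failing check is what would trigger regeneration instead of the "skipping valid" paths seen earlier. The equivalent test in Go with crypto/x509, a sketch assuming a PEM-encoded certificate file:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires within d (the -checkend semantics).
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
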
	I1205 06:20:07.570591  464214 kubeadm.go:401] StartCluster: {Name:functional-252233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:20:07.570671  464214 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:20:07.570747  464214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:20:07.600104  464214 cri.go:89] found id: "bad15b0196dd6c86b5b5ed1d4b25c0cbe7cceb55af916d90672c96676d8ac439"
	I1205 06:20:07.600116  464214 cri.go:89] found id: "74d658d21b8f8d10a9787dc4cd9702840c823b1685bbb1e46af5681e986ca49f"
	I1205 06:20:07.600120  464214 cri.go:89] found id: "1297638fcf21c376c6a3864bf22845b59b693449b9f1e442362cb0189253bdbb"
	I1205 06:20:07.600122  464214 cri.go:89] found id: "9ae7d183699a66834ac009376059f5d35cdee9e2df17d044c049c387ba34f33c"
	I1205 06:20:07.600125  464214 cri.go:89] found id: "97be6ab9890b3cd02177a91066ce14109cd98cf02bda456588bbcc7e5223d3d2"
	I1205 06:20:07.600129  464214 cri.go:89] found id: "7da0670c510348a64f970eea198eec9aa72012a729a38f7d277ad109caafbce2"
	I1205 06:20:07.600131  464214 cri.go:89] found id: "4fd9b487e136749ab5bc21e474c5c58413d525e10416b781338a72efb5c769da"
	I1205 06:20:07.600134  464214 cri.go:89] found id: "28e19109a51eaeb8461a3ad3ed1735bdc6ad9a797bb10d8540a4920450885c5c"
	I1205 06:20:07.600136  464214 cri.go:89] found id: ""
	I1205 06:20:07.600199  464214 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 06:20:07.611389  464214 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:20:07Z" level=error msg="open /run/runc: no such file or directory"
	I1205 06:20:07.611472  464214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:20:07.619860  464214 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:20:07.619870  464214 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:20:07.619932  464214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:20:07.627410  464214 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:20:07.627931  464214 kubeconfig.go:125] found "functional-252233" server: "https://192.168.49.2:8441"
	I1205 06:20:07.629272  464214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:20:07.637135  464214 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:18:11.295495999 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:20:06.696795660 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
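
Drift detection here is plain `diff -u old new`: exit status 0 means identical, 1 means the files differ (reconfigure, as happens above when the admission-plugins override changes), anything else is an error. A sketch of the same check from Go, using the paths from the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        switch e := err.(type) {
        case nil:
            fmt.Println("no drift: configs identical")
        case *exec.ExitError:
            if e.ExitCode() == 1 { // diff's "files differ"
                fmt.Printf("drift detected, will reconfigure:\n%s", out)
                return
            }
            log.Fatalf("diff failed: %v", err)
        default:
            log.Fatal(err)
        }
    }
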
	I1205 06:20:07.637146  464214 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:20:07.637156  464214 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 06:20:07.637227  464214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:20:07.664506  464214 cri.go:89] found id: "bad15b0196dd6c86b5b5ed1d4b25c0cbe7cceb55af916d90672c96676d8ac439"
	I1205 06:20:07.664518  464214 cri.go:89] found id: "74d658d21b8f8d10a9787dc4cd9702840c823b1685bbb1e46af5681e986ca49f"
	I1205 06:20:07.664522  464214 cri.go:89] found id: "1297638fcf21c376c6a3864bf22845b59b693449b9f1e442362cb0189253bdbb"
	I1205 06:20:07.664524  464214 cri.go:89] found id: "9ae7d183699a66834ac009376059f5d35cdee9e2df17d044c049c387ba34f33c"
	I1205 06:20:07.664527  464214 cri.go:89] found id: "97be6ab9890b3cd02177a91066ce14109cd98cf02bda456588bbcc7e5223d3d2"
	I1205 06:20:07.664530  464214 cri.go:89] found id: "7da0670c510348a64f970eea198eec9aa72012a729a38f7d277ad109caafbce2"
	I1205 06:20:07.664532  464214 cri.go:89] found id: "4fd9b487e136749ab5bc21e474c5c58413d525e10416b781338a72efb5c769da"
	I1205 06:20:07.664534  464214 cri.go:89] found id: "28e19109a51eaeb8461a3ad3ed1735bdc6ad9a797bb10d8540a4920450885c5c"
	I1205 06:20:07.664537  464214 cri.go:89] found id: ""
	I1205 06:20:07.664541  464214 cri.go:252] Stopping containers: [bad15b0196dd6c86b5b5ed1d4b25c0cbe7cceb55af916d90672c96676d8ac439 74d658d21b8f8d10a9787dc4cd9702840c823b1685bbb1e46af5681e986ca49f 1297638fcf21c376c6a3864bf22845b59b693449b9f1e442362cb0189253bdbb 9ae7d183699a66834ac009376059f5d35cdee9e2df17d044c049c387ba34f33c 97be6ab9890b3cd02177a91066ce14109cd98cf02bda456588bbcc7e5223d3d2 7da0670c510348a64f970eea198eec9aa72012a729a38f7d277ad109caafbce2 4fd9b487e136749ab5bc21e474c5c58413d525e10416b781338a72efb5c769da 28e19109a51eaeb8461a3ad3ed1735bdc6ad9a797bb10d8540a4920450885c5c]
	I1205 06:20:07.664597  464214 ssh_runner.go:195] Run: which crictl
	I1205 06:20:07.668307  464214 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 bad15b0196dd6c86b5b5ed1d4b25c0cbe7cceb55af916d90672c96676d8ac439 74d658d21b8f8d10a9787dc4cd9702840c823b1685bbb1e46af5681e986ca49f 1297638fcf21c376c6a3864bf22845b59b693449b9f1e442362cb0189253bdbb 9ae7d183699a66834ac009376059f5d35cdee9e2df17d044c049c387ba34f33c 97be6ab9890b3cd02177a91066ce14109cd98cf02bda456588bbcc7e5223d3d2 7da0670c510348a64f970eea198eec9aa72012a729a38f7d277ad109caafbce2 4fd9b487e136749ab5bc21e474c5c58413d525e10416b781338a72efb5c769da 28e19109a51eaeb8461a3ad3ed1735bdc6ad9a797bb10d8540a4920450885c5c
	I1205 06:20:07.729409  464214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:20:07.845832  464214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:20:07.853843  464214 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  5 06:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Dec  5 06:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:18 /etc/kubernetes/scheduler.conf
	
	I1205 06:20:07.853904  464214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:20:07.862088  464214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:20:07.870171  464214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:20:07.870228  464214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:20:07.877903  464214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:20:07.886035  464214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:20:07.886094  464214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:20:07.893892  464214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:20:07.901702  464214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:20:07.901759  464214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:20:07.909492  464214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:20:07.917806  464214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:20:07.968457  464214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:20:10.797162  464214 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.828678738s)
	I1205 06:20:10.797226  464214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:20:11.039589  464214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:20:11.112051  464214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:20:11.173402  464214 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:20:11.173480  464214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:20:11.673988  464214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:20:12.174494  464214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:20:12.190104  464214 api_server.go:72] duration metric: took 1.016701595s to wait for apiserver process to appear ...
	I1205 06:20:12.190118  464214 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:20:12.190135  464214 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1205 06:20:16.235615  464214 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 06:20:16.235635  464214 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 06:20:16.235647  464214 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1205 06:20:16.450568  464214 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:20:16.450588  464214 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:20:16.690965  464214 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1205 06:20:16.699245  464214 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:20:16.699258  464214 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:20:17.190910  464214 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1205 06:20:17.212409  464214 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 06:20:17.212430  464214 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 06:20:17.691138  464214 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1205 06:20:17.700319  464214 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1205 06:20:17.714884  464214 api_server.go:141] control plane version: v1.34.2
	I1205 06:20:17.714900  464214 api_server.go:131] duration metric: took 5.524777151s to wait for apiserver health ...
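
The healthz wait above tolerates a 403 (the anonymous probe hits the endpoint before RBAC bootstrap completes) and 500s (poststarthooks still settling, as the `[-] ... failed: reason withheld` entries show) and simply re-polls until it sees 200. A minimal polling loop in Go; skipping TLS verification is assumed because the apiserver certificate is not in the probe's trust store, and the endpoint and timeouts are illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the apiserver cert is not yet locally trusted
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8441/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403/500 while bootstrapping: fall through and retry
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for /healthz")
    }
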
	I1205 06:20:17.714911  464214 cni.go:84] Creating CNI manager for ""
	I1205 06:20:17.714916  464214 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:20:17.718773  464214 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1205 06:20:17.721896  464214 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 06:20:17.726631  464214 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1205 06:20:17.726642  464214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 06:20:17.747382  464214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 06:20:18.335251  464214 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:20:18.343178  464214 system_pods.go:59] 8 kube-system pods found
	I1205 06:20:18.343215  464214 system_pods.go:61] "coredns-66bc5c9577-cnv7z" [c7c68d20-b20a-4c9b-8ffa-e7eb71025bfe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:20:18.343223  464214 system_pods.go:61] "etcd-functional-252233" [dcfc3b17-b7dc-4bf5-a970-a9ba1f3f121c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:20:18.343229  464214 system_pods.go:61] "kindnet-n6cpg" [ddf48780-0be1-4812-a6de-2d25e90a6e85] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 06:20:18.343235  464214 system_pods.go:61] "kube-apiserver-functional-252233" [d45384d3-8fba-40ad-ad50-096f862bc0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:20:18.343242  464214 system_pods.go:61] "kube-controller-manager-functional-252233" [ddfead3b-77ef-401b-b323-7e2c3f3049b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 06:20:18.343249  464214 system_pods.go:61] "kube-proxy-mc7ft" [6c4220ff-5dc0-4527-9571-7036c4a7cfec] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 06:20:18.343254  464214 system_pods.go:61] "kube-scheduler-functional-252233" [08f43a58-5a8d-4181-a62d-64e7a4badeef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 06:20:18.343257  464214 system_pods.go:61] "storage-provisioner" [5a7152d4-54c8-4e57-b889-d123abc92eed] Running
	I1205 06:20:18.343263  464214 system_pods.go:74] duration metric: took 8.000863ms to wait for pod list to return data ...
	I1205 06:20:18.343277  464214 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:20:18.346617  464214 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 06:20:18.346643  464214 node_conditions.go:123] node cpu capacity is 2
	I1205 06:20:18.346655  464214 node_conditions.go:105] duration metric: took 3.37503ms to run NodePressure ...
	I1205 06:20:18.346715  464214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:20:18.607488  464214 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1205 06:20:18.610827  464214 kubeadm.go:744] kubelet initialised
	I1205 06:20:18.610838  464214 kubeadm.go:745] duration metric: took 3.329228ms waiting for restarted kubelet to initialise ...
	I1205 06:20:18.610854  464214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 06:20:18.620809  464214 ops.go:34] apiserver oom_adj: -16
	I1205 06:20:18.620821  464214 kubeadm.go:602] duration metric: took 11.000946917s to restartPrimaryControlPlane
	I1205 06:20:18.620830  464214 kubeadm.go:403] duration metric: took 11.050250141s to StartCluster
	I1205 06:20:18.620845  464214 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:20:18.620910  464214 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:20:18.621529  464214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:20:18.621742  464214 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:20:18.621995  464214 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:20:18.622031  464214 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:20:18.622108  464214 addons.go:70] Setting default-storageclass=true in profile "functional-252233"
	I1205 06:20:18.622119  464214 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-252233"
	I1205 06:20:18.622472  464214 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
	I1205 06:20:18.622628  464214 addons.go:70] Setting storage-provisioner=true in profile "functional-252233"
	I1205 06:20:18.622644  464214 addons.go:239] Setting addon storage-provisioner=true in "functional-252233"
	W1205 06:20:18.622649  464214 addons.go:248] addon storage-provisioner should already be in state true
	I1205 06:20:18.622669  464214 host.go:66] Checking if "functional-252233" exists ...
	I1205 06:20:18.623253  464214 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
	I1205 06:20:18.627095  464214 out.go:179] * Verifying Kubernetes components...
	I1205 06:20:18.630101  464214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:20:18.646921  464214 addons.go:239] Setting addon default-storageclass=true in "functional-252233"
	W1205 06:20:18.646931  464214 addons.go:248] addon default-storageclass should already be in state true
	I1205 06:20:18.646954  464214 host.go:66] Checking if "functional-252233" exists ...
	I1205 06:20:18.647375  464214 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
	I1205 06:20:18.653808  464214 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:20:18.656687  464214 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:20:18.656698  464214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:20:18.656765  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:20:18.687777  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:20:18.694818  464214 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:20:18.694830  464214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:20:18.694894  464214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:20:18.725026  464214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:20:18.846468  464214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:20:18.861616  464214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:20:18.894699  464214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:20:19.775840  464214 node_ready.go:35] waiting up to 6m0s for node "functional-252233" to be "Ready" ...
	I1205 06:20:19.779214  464214 node_ready.go:49] node "functional-252233" is "Ready"
	I1205 06:20:19.779231  464214 node_ready.go:38] duration metric: took 3.368679ms for node "functional-252233" to be "Ready" ...
	I1205 06:20:19.779242  464214 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:20:19.779304  464214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:20:19.786790  464214 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1205 06:20:19.790539  464214 addons.go:530] duration metric: took 1.16849643s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 06:20:19.792455  464214 api_server.go:72] duration metric: took 1.170690604s to wait for apiserver process to appear ...
	I1205 06:20:19.792467  464214 api_server.go:88] waiting for apiserver healthz status ...
	I1205 06:20:19.792484  464214 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1205 06:20:19.801463  464214 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1205 06:20:19.802486  464214 api_server.go:141] control plane version: v1.34.2
	I1205 06:20:19.802499  464214 api_server.go:131] duration metric: took 10.026847ms to wait for apiserver health ...
	I1205 06:20:19.802506  464214 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 06:20:19.805915  464214 system_pods.go:59] 8 kube-system pods found
	I1205 06:20:19.805933  464214 system_pods.go:61] "coredns-66bc5c9577-cnv7z" [c7c68d20-b20a-4c9b-8ffa-e7eb71025bfe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:20:19.805941  464214 system_pods.go:61] "etcd-functional-252233" [dcfc3b17-b7dc-4bf5-a970-a9ba1f3f121c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:20:19.805945  464214 system_pods.go:61] "kindnet-n6cpg" [ddf48780-0be1-4812-a6de-2d25e90a6e85] Running
	I1205 06:20:19.805951  464214 system_pods.go:61] "kube-apiserver-functional-252233" [d45384d3-8fba-40ad-ad50-096f862bc0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:20:19.805956  464214 system_pods.go:61] "kube-controller-manager-functional-252233" [ddfead3b-77ef-401b-b323-7e2c3f3049b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 06:20:19.805959  464214 system_pods.go:61] "kube-proxy-mc7ft" [6c4220ff-5dc0-4527-9571-7036c4a7cfec] Running
	I1205 06:20:19.805964  464214 system_pods.go:61] "kube-scheduler-functional-252233" [08f43a58-5a8d-4181-a62d-64e7a4badeef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 06:20:19.805967  464214 system_pods.go:61] "storage-provisioner" [5a7152d4-54c8-4e57-b889-d123abc92eed] Running
	I1205 06:20:19.805970  464214 system_pods.go:74] duration metric: took 3.460134ms to wait for pod list to return data ...
	I1205 06:20:19.805976  464214 default_sa.go:34] waiting for default service account to be created ...
	I1205 06:20:19.808583  464214 default_sa.go:45] found service account: "default"
	I1205 06:20:19.808595  464214 default_sa.go:55] duration metric: took 2.614496ms for default service account to be created ...
	I1205 06:20:19.808603  464214 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 06:20:19.811752  464214 system_pods.go:86] 8 kube-system pods found
	I1205 06:20:19.811771  464214 system_pods.go:89] "coredns-66bc5c9577-cnv7z" [c7c68d20-b20a-4c9b-8ffa-e7eb71025bfe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 06:20:19.811777  464214 system_pods.go:89] "etcd-functional-252233" [dcfc3b17-b7dc-4bf5-a970-a9ba1f3f121c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 06:20:19.811781  464214 system_pods.go:89] "kindnet-n6cpg" [ddf48780-0be1-4812-a6de-2d25e90a6e85] Running
	I1205 06:20:19.811787  464214 system_pods.go:89] "kube-apiserver-functional-252233" [d45384d3-8fba-40ad-ad50-096f862bc0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 06:20:19.811792  464214 system_pods.go:89] "kube-controller-manager-functional-252233" [ddfead3b-77ef-401b-b323-7e2c3f3049b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 06:20:19.811795  464214 system_pods.go:89] "kube-proxy-mc7ft" [6c4220ff-5dc0-4527-9571-7036c4a7cfec] Running
	I1205 06:20:19.811800  464214 system_pods.go:89] "kube-scheduler-functional-252233" [08f43a58-5a8d-4181-a62d-64e7a4badeef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 06:20:19.811802  464214 system_pods.go:89] "storage-provisioner" [5a7152d4-54c8-4e57-b889-d123abc92eed] Running
	I1205 06:20:19.811809  464214 system_pods.go:126] duration metric: took 3.200907ms to wait for k8s-apps to be running ...
	I1205 06:20:19.811820  464214 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 06:20:19.811880  464214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:20:19.825045  464214 system_svc.go:56] duration metric: took 13.219383ms WaitForService to wait for kubelet
	I1205 06:20:19.825063  464214 kubeadm.go:587] duration metric: took 1.203301775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:20:19.825080  464214 node_conditions.go:102] verifying NodePressure condition ...
	I1205 06:20:19.827921  464214 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 06:20:19.827936  464214 node_conditions.go:123] node cpu capacity is 2
	I1205 06:20:19.827944  464214 node_conditions.go:105] duration metric: took 2.86039ms to run NodePressure ...
	I1205 06:20:19.827956  464214 start.go:242] waiting for startup goroutines ...
	I1205 06:20:19.827962  464214 start.go:247] waiting for cluster config update ...
	I1205 06:20:19.827971  464214 start.go:256] writing updated cluster config ...
	I1205 06:20:19.828258  464214 ssh_runner.go:195] Run: rm -f paused
	I1205 06:20:19.832082  464214 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 06:20:19.835695  464214 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cnv7z" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 06:20:21.841669  464214 pod_ready.go:104] pod "coredns-66bc5c9577-cnv7z" is not "Ready", error: <nil>
	I1205 06:20:24.341816  464214 pod_ready.go:94] pod "coredns-66bc5c9577-cnv7z" is "Ready"
	I1205 06:20:24.341830  464214 pod_ready.go:86] duration metric: took 4.506122209s for pod "coredns-66bc5c9577-cnv7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:24.348347  464214 pod_ready.go:83] waiting for pod "etcd-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:24.353319  464214 pod_ready.go:94] pod "etcd-functional-252233" is "Ready"
	I1205 06:20:24.353334  464214 pod_ready.go:86] duration metric: took 4.973308ms for pod "etcd-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:24.355918  464214 pod_ready.go:83] waiting for pod "kube-apiserver-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 06:20:26.361868  464214 pod_ready.go:104] pod "kube-apiserver-functional-252233" is not "Ready", error: <nil>
	W1205 06:20:28.362271  464214 pod_ready.go:104] pod "kube-apiserver-functional-252233" is not "Ready", error: <nil>
	I1205 06:20:29.861990  464214 pod_ready.go:94] pod "kube-apiserver-functional-252233" is "Ready"
	I1205 06:20:29.862009  464214 pod_ready.go:86] duration metric: took 5.506072564s for pod "kube-apiserver-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:29.864381  464214 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:29.869322  464214 pod_ready.go:94] pod "kube-controller-manager-functional-252233" is "Ready"
	I1205 06:20:29.869353  464214 pod_ready.go:86] duration metric: took 4.959245ms for pod "kube-controller-manager-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:29.872069  464214 pod_ready.go:83] waiting for pod "kube-proxy-mc7ft" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:29.877757  464214 pod_ready.go:94] pod "kube-proxy-mc7ft" is "Ready"
	I1205 06:20:29.877774  464214 pod_ready.go:86] duration metric: took 5.69033ms for pod "kube-proxy-mc7ft" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:29.880596  464214 pod_ready.go:83] waiting for pod "kube-scheduler-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:30.140589  464214 pod_ready.go:94] pod "kube-scheduler-functional-252233" is "Ready"
	I1205 06:20:30.140606  464214 pod_ready.go:86] duration metric: took 259.995371ms for pod "kube-scheduler-functional-252233" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 06:20:30.140618  464214 pod_ready.go:40] duration metric: took 10.308514916s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
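
Each pod_ready wait above polls a pod's Ready condition for one of the listed label sets. Outside minikube's own helpers, the same wait can be expressed with `kubectl wait`; a hedged Go sketch shelling out to it (namespace and timeout mirror the log, but this is not minikube's implementation):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // kubectl wait blocks until the condition holds or --timeout expires.
        cmd := exec.Command("kubectl", "wait",
            "--namespace", "kube-system",
            "--for=condition=Ready",
            "pod", "--all",
            "--timeout=4m")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("pods not ready: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }
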
	I1205 06:20:30.199753  464214 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1205 06:20:30.203020  464214 out.go:179] * Done! kubectl is now configured to use "functional-252233" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 06:21:06 functional-252233 crio[3563]: time="2025-12-05T06:21:06.175362903Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-c8mht Namespace:default ID:5292039dc3e01ef2faa65378a3adeca111c625109bc967709efb5e4675667776 UID:0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4 NetNS:/var/run/netns/31517a10-80cc-42ba-8bdf-f12abb50890a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000674d40}] Aliases:map[]}"
	Dec 05 06:21:06 functional-252233 crio[3563]: time="2025-12-05T06:21:06.175712724Z" level=info msg="Checking pod default_hello-node-75c85bcc94-c8mht for CNI network kindnet (type=ptp)"
	Dec 05 06:21:06 functional-252233 crio[3563]: time="2025-12-05T06:21:06.178608175Z" level=info msg="Ran pod sandbox 5292039dc3e01ef2faa65378a3adeca111c625109bc967709efb5e4675667776 with infra container: default/hello-node-75c85bcc94-c8mht/POD" id=d4cf4a73-3c24-4f12-9e81-3d5165dcfd66 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 06:21:06 functional-252233 crio[3563]: time="2025-12-05T06:21:06.182263905Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6a614dd4-2593-4fc4-add1-b1fbf3d77a48 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.202722838Z" level=info msg="Stopping pod sandbox: fa74168e7e42bf6a7b295944162ec432728439e244437356d62cda18b1e17728" id=f473419a-dba9-4091-b39d-326029bd9f6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.202781325Z" level=info msg="Stopped pod sandbox (already stopped): fa74168e7e42bf6a7b295944162ec432728439e244437356d62cda18b1e17728" id=f473419a-dba9-4091-b39d-326029bd9f6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.203420102Z" level=info msg="Removing pod sandbox: fa74168e7e42bf6a7b295944162ec432728439e244437356d62cda18b1e17728" id=064670a5-d1aa-4ff8-97fb-a46fc152bac5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.207182475Z" level=info msg="Removed pod sandbox: fa74168e7e42bf6a7b295944162ec432728439e244437356d62cda18b1e17728" id=064670a5-d1aa-4ff8-97fb-a46fc152bac5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.20773385Z" level=info msg="Stopping pod sandbox: 3f800b9bb116f28a3c73162582a7c52bf1da59cc9021e9af13f6d528aabb8358" id=143e8c45-3a7a-4969-a7e5-d7cd938a0b38 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.207786848Z" level=info msg="Stopped pod sandbox (already stopped): 3f800b9bb116f28a3c73162582a7c52bf1da59cc9021e9af13f6d528aabb8358" id=143e8c45-3a7a-4969-a7e5-d7cd938a0b38 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.208148009Z" level=info msg="Removing pod sandbox: 3f800b9bb116f28a3c73162582a7c52bf1da59cc9021e9af13f6d528aabb8358" id=4ceabf0a-5532-4210-bc17-f5e6a5e9a935 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.211687463Z" level=info msg="Removed pod sandbox: 3f800b9bb116f28a3c73162582a7c52bf1da59cc9021e9af13f6d528aabb8358" id=4ceabf0a-5532-4210-bc17-f5e6a5e9a935 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.212216519Z" level=info msg="Stopping pod sandbox: c3cb33eac552d7dab1198e97b81ad89530fee06145f12b32c21c4c80b7b240d2" id=de13ec66-e279-41e7-a297-3f770eb9e846 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.212263699Z" level=info msg="Stopped pod sandbox (already stopped): c3cb33eac552d7dab1198e97b81ad89530fee06145f12b32c21c4c80b7b240d2" id=de13ec66-e279-41e7-a297-3f770eb9e846 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.212627773Z" level=info msg="Removing pod sandbox: c3cb33eac552d7dab1198e97b81ad89530fee06145f12b32c21c4c80b7b240d2" id=101f582b-b09e-4697-b1d1-a21e67f39cec name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:21:11 functional-252233 crio[3563]: time="2025-12-05T06:21:11.216088424Z" level=info msg="Removed pod sandbox: c3cb33eac552d7dab1198e97b81ad89530fee06145f12b32c21c4c80b7b240d2" id=101f582b-b09e-4697-b1d1-a21e67f39cec name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 06:21:21 functional-252233 crio[3563]: time="2025-12-05T06:21:21.185037212Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9d517f49-2380-464e-a6ab-80346c6daf0a name=/runtime.v1.ImageService/PullImage
	Dec 05 06:21:32 functional-252233 crio[3563]: time="2025-12-05T06:21:32.185031928Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d29be0e0-971f-49ca-a6fd-cf80a68b035e name=/runtime.v1.ImageService/PullImage
	Dec 05 06:21:46 functional-252233 crio[3563]: time="2025-12-05T06:21:46.184859626Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=81bd184f-cfe1-4f2b-bc4d-3267b0c3e112 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:22:26 functional-252233 crio[3563]: time="2025-12-05T06:22:26.184936036Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e019f7ec-032c-42d4-a223-c766fb4daf9f name=/runtime.v1.ImageService/PullImage
	Dec 05 06:22:28 functional-252233 crio[3563]: time="2025-12-05T06:22:28.185178034Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3200329d-72f9-4132-9069-9459638c75f6 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:23:54 functional-252233 crio[3563]: time="2025-12-05T06:23:54.184934032Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6bab27ce-f1c1-4064-b1d1-5cecbd926ba4 name=/runtime.v1.ImageService/PullImage
	Dec 05 06:23:57 functional-252233 crio[3563]: time="2025-12-05T06:23:57.185499861Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5fad6f72-73d4-42c9-a708-a7a0973f12fc name=/runtime.v1.ImageService/PullImage
	Dec 05 06:26:42 functional-252233 crio[3563]: time="2025-12-05T06:26:42.184398065Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b667daf6-cc1a-4723-bd8f-8fde836502af name=/runtime.v1.ImageService/PullImage
	Dec 05 06:26:48 functional-252233 crio[3563]: time="2025-12-05T06:26:48.184571642Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0e2455dc-daee-4901-a71a-a4a9a37cea1b name=/runtime.v1.ImageService/PullImage
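
The repeated "Pulling image: kicbase/echo-server:latest" entries above are back-off retries of the same unqualified pull, at growing intervals (06:21:06 through 06:26:48); the pull never completes, and the kubelet section at the end of this report shows why. A minimal way to reproduce the failure by hand — a sketch, assuming crictl is available inside the node as usual for minikube's CRI-O driver:

    # The short name should fail under CRI-O's enforcing short-name mode...
    minikube -p functional-252233 ssh -- sudo crictl pull kicbase/echo-server:latest
    # ...while the fully qualified reference is unambiguous:
    minikube -p functional-252233 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest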
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d0ee5f1704289       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   9fc84b97468b7       sp-pod                                      default
	79a5e38a87398       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   21f2aeee01362       nginx-svc                                   default
	3f937f4c796f9       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  10 minutes ago      Running             kube-proxy                2                   b8b3a0ea848e3       kube-proxy-mc7ft                            kube-system
	03b13a88094f0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   5ac67ebd7484d       coredns-66bc5c9577-cnv7z                    kube-system
	ef28bc09aa5cf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   61259fc4cdcf0       storage-provisioner                         kube-system
	550cc9aab76cc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   5c7fa243969ff       kindnet-n6cpg                               kube-system
	8a7fe69e1988d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                  10 minutes ago      Running             kube-apiserver            0                   ae6634e85d67d       kube-apiserver-functional-252233            kube-system
	b22c489f0e3fa       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  10 minutes ago      Running             kube-scheduler            2                   ee02d4a7e9cfa       kube-scheduler-functional-252233            kube-system
	ddf23714abcb5       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  10 minutes ago      Running             kube-controller-manager   2                   0f3fa1349ea44       kube-controller-manager-functional-252233   kube-system
	6e4b2ff985c8c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  10 minutes ago      Running             etcd                      2                   b7f3559c854ea       etcd-functional-252233                      kube-system
	bad15b0196dd6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   5c7fa243969ff       kindnet-n6cpg                               kube-system
	74d658d21b8f8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  11 minutes ago      Exited              kube-scheduler            1                   ee02d4a7e9cfa       kube-scheduler-functional-252233            kube-system
	1297638fcf21c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  11 minutes ago      Exited              kube-controller-manager   1                   0f3fa1349ea44       kube-controller-manager-functional-252233   kube-system
	97be6ab9890b3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  11 minutes ago      Exited              etcd                      1                   b7f3559c854ea       etcd-functional-252233                      kube-system
	7da0670c51034       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   61259fc4cdcf0       storage-provisioner                         kube-system
	4fd9b487e1367       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   5ac67ebd7484d       coredns-66bc5c9577-cnv7z                    kube-system
	28e19109a51ea       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  11 minutes ago      Exited              kube-proxy                1                   b8b3a0ea848e3       kube-proxy-mc7ft                            kube-system
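
This table is CRI-O's view of the containers; the Exited attempt-1 rows and their Running attempt-2 replacements line up with the kubelet restarts recorded in the node events further down. To regenerate the table or pull logs for one entry — a sketch; truncated container IDs are accepted as prefixes:

    minikube -p functional-252233 ssh -- sudo crictl ps -a                # the table above
    minikube -p functional-252233 ssh -- sudo crictl logs 03b13a88094f0   # e.g. the running coredns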
	
	
	==> coredns [03b13a88094f0b13b3c373fb3c50f8ed9b5788876ee3af6e0c32c96277812f23] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58671 - 47446 "HINFO IN 3277755564921422371.276689778767722836. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015087245s
	
	
	==> coredns [4fd9b487e136749ab5bc21e474c5c58413d525e10416b781338a72efb5c769da] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57845 - 13676 "HINFO IN 8620269869114968458.7177673400970149118. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020746729s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-252233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-252233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=functional-252233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T06_18_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 06:18:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-252233
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 06:30:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 06:30:06 +0000   Fri, 05 Dec 2025 06:18:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 06:30:06 +0000   Fri, 05 Dec 2025 06:18:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 06:30:06 +0000   Fri, 05 Dec 2025 06:18:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 06:30:06 +0000   Fri, 05 Dec 2025 06:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-252233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                4332ac60-fba2-4b7d-b994-3d1b3f2f56c3
	  Boot ID:                    6438d548-ea0a-487b-93bc-8af12c014d83
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c8mht                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-cprsz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-cnv7z                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-252233                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-n6cpg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-252233             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-252233    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mc7ft                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-252233             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-252233 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-252233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-252233 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-252233 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-252233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-252233 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-252233 event: Registered Node functional-252233 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-252233 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-252233 event: Registered Node functional-252233 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-252233 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-252233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-252233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-252233 event: Registered Node functional-252233 in Controller
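
The Allocated resources summary can be cross-checked against the per-pod requests listed above — a quick arithmetic sketch in shell:

    echo $((100+100+100+250+200+100))m   # CPU requests: 850m
    echo $((850*100/2000))%              # of 2000m allocatable: 42%
    echo $((70+100+50))Mi                # memory requests: 220Mi
    echo $((225280*100/8022300))%        # 220Mi = 225280Ki of 8022300Ki allocatable: 2%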
	
	
	==> dmesg <==
	[Dec 5 03:17] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514036] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e4b2ff985c8c79a11659cab8fda4fb23fb6c33700b6e78dd724ced341209fb6] <==
	{"level":"warn","ts":"2025-12-05T06:20:14.671229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.712096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.753650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.807709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.835656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.890613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.907470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.936010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.942140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.959002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:14.984284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.004280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.022497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.037765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.062634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.091165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.112606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.133410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.160741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.185196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.204298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:20:15.298244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34792","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:30:13.095093Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1139}
	{"level":"info","ts":"2025-12-05T06:30:13.119021Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1139,"took":"23.629458ms","hash":3167747999,"current-db-size-bytes":3231744,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1441792,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-12-05T06:30:13.119078Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3167747999,"revision":1139,"compact-revision":-1}
	
	
	==> etcd [97be6ab9890b3cd02177a91066ce14109cd98cf02bda456588bbcc7e5223d3d2] <==
	{"level":"warn","ts":"2025-12-05T06:19:31.398892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:19:31.421761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:19:31.444164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:19:31.474501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:19:31.486880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:19:31.507393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T06:19:31.553210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T06:19:55.604908Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-05T06:19:55.604975Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-252233","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-05T06:19:55.605100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T06:19:55.741536Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-05T06:19:55.741736Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:19:55.741776Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T06:19:55.741787Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-05T06:19:55.741843Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T06:19:55.741859Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T06:19:55.741869Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-05T06:19:55.741897Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:19:55.741916Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-05T06:19:55.741951Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-05T06:19:55.741965Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-05T06:19:55.745898Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-05T06:19:55.745990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T06:19:55.746067Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-05T06:19:55.746096Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-252233","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 06:30:50 up  3:12,  0 user,  load average: 0.11, 0.39, 0.91
	Linux functional-252233 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [550cc9aab76cc44680d528ba3e3a81050b039404cccd3b382036e57850f71161] <==
	I1205 06:28:47.817877       1 main.go:301] handling current node
	I1205 06:28:57.818308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:28:57.818341       1 main.go:301] handling current node
	I1205 06:29:07.825915       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:29:07.825951       1 main.go:301] handling current node
	I1205 06:29:17.817775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:29:17.817824       1 main.go:301] handling current node
	I1205 06:29:27.818197       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:29:27.818259       1 main.go:301] handling current node
	I1205 06:29:37.822877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:29:37.822911       1 main.go:301] handling current node
	I1205 06:29:47.818326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:29:47.818362       1 main.go:301] handling current node
	I1205 06:29:57.817977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:29:57.818012       1 main.go:301] handling current node
	I1205 06:30:07.818482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:30:07.818519       1 main.go:301] handling current node
	I1205 06:30:17.818831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:30:17.818938       1 main.go:301] handling current node
	I1205 06:30:27.817967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:30:27.818079       1 main.go:301] handling current node
	I1205 06:30:37.826456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:30:37.826490       1 main.go:301] handling current node
	I1205 06:30:47.818642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:30:47.818678       1 main.go:301] handling current node
	
	
	==> kindnet [bad15b0196dd6c86b5b5ed1d4b25c0cbe7cceb55af916d90672c96676d8ac439] <==
	I1205 06:19:28.139717       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 06:19:28.139942       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1205 06:19:28.140069       1 main.go:148] setting mtu 1500 for CNI 
	I1205 06:19:28.140080       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 06:19:28.140093       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T06:19:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 06:19:28.352452       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 06:19:28.352545       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 06:19:28.352581       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 06:19:28.353516       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 06:19:32.553860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 06:19:32.553885       1 metrics.go:72] Registering metrics
	I1205 06:19:32.553994       1 controller.go:711] "Syncing nftables rules"
	I1205 06:19:38.346551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:19:38.346605       1 main.go:301] handling current node
	I1205 06:19:48.347330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 06:19:48.347371       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8a7fe69e1988d88dd2052a6ca1d310a449726063b90a318e29c6c9d636baf389] <==
	I1205 06:20:16.488524       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1205 06:20:16.488496       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 06:20:16.519158       1 aggregator.go:171] initial CRD sync complete...
	I1205 06:20:16.522399       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 06:20:16.522452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 06:20:16.522482       1 cache.go:39] Caches are synced for autoregister controller
	E1205 06:20:16.572632       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 06:20:17.061242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 06:20:17.258042       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 06:20:18.325864       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1205 06:20:18.473057       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1205 06:20:18.545705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 06:20:18.553778       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 06:20:19.357490       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 06:20:19.624731       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 06:20:19.672205       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 06:20:33.557707       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.69.195"}
	E1205 06:20:37.376724       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1205 06:20:39.688486       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.254.53"}
	I1205 06:20:48.471276       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.124.250"}
	E1205 06:20:57.421182       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49436: use of closed network connection
	E1205 06:20:58.433513       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1205 06:21:05.734989       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42426: use of closed network connection
	I1205 06:21:05.949457       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.172.74"}
	I1205 06:30:16.402591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1297638fcf21c376c6a3864bf22845b59b693449b9f1e442362cb0189253bdbb] <==
	I1205 06:19:35.707170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1205 06:19:35.710156       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 06:19:35.710276       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 06:19:35.710408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:19:35.710445       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 06:19:35.710472       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 06:19:35.710745       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1205 06:19:35.714267       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:19:35.714649       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 06:19:35.714724       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 06:19:35.714794       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-252233"
	I1205 06:19:35.714835       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 06:19:35.718055       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 06:19:35.718516       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:19:35.723733       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 06:19:35.728024       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1205 06:19:35.732819       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1205 06:19:35.736067       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 06:19:35.738285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:19:35.754854       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:19:35.756110       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1205 06:19:35.756484       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 06:19:35.756541       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 06:19:35.762628       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 06:19:35.765928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [ddf23714abcb5e0a646cdb19d2be911e1d276a627f2bfc6ee5019bfcb793deaf] <==
	I1205 06:20:19.331647       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 06:20:19.339414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:20:19.339509       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1205 06:20:19.339523       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 06:20:19.339533       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1205 06:20:19.339542       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 06:20:19.343812       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 06:20:19.354448       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 06:20:19.365609       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 06:20:19.365686       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1205 06:20:19.366136       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 06:20:19.369361       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 06:20:19.369594       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 06:20:19.377115       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1205 06:20:19.377242       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1205 06:20:19.377291       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 06:20:19.377321       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1205 06:20:19.377359       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1205 06:20:19.382844       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1205 06:20:19.383954       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 06:20:19.387280       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 06:20:19.388752       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 06:20:19.395050       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 06:20:19.395146       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 06:20:19.395181       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [28e19109a51eaeb8461a3ad3ed1735bdc6ad9a797bb10d8540a4920450885c5c] <==
	I1205 06:19:27.886846       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:19:28.655572       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:19:32.515378       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:19:32.536684       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:19:32.536787       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:19:32.807628       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:19:32.807684       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:19:32.823627       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:19:32.823958       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:19:32.823983       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:19:32.825461       1 config.go:200] "Starting service config controller"
	I1205 06:19:32.825483       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:19:32.825506       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:19:32.825510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:19:32.825521       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:19:32.825525       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:19:32.836710       1 config.go:309] "Starting node config controller"
	I1205 06:19:32.836748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:19:32.836759       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:19:32.925750       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:19:32.925791       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:19:32.925826       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [3f937f4c796f9ef8bf3f925d2297351fbad119ad23b4ae02b8283260e411106c] <==
	I1205 06:20:17.645054       1 server_linux.go:53] "Using iptables proxy"
	I1205 06:20:17.721730       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 06:20:17.824163       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 06:20:17.824323       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1205 06:20:17.824461       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 06:20:17.859704       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 06:20:17.859823       1 server_linux.go:132] "Using iptables Proxier"
	I1205 06:20:17.877506       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 06:20:17.877795       1 server.go:527] "Version info" version="v1.34.2"
	I1205 06:20:17.877814       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:20:17.887457       1 config.go:200] "Starting service config controller"
	I1205 06:20:17.887477       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 06:20:17.887493       1 config.go:106] "Starting endpoint slice config controller"
	I1205 06:20:17.887497       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 06:20:17.887508       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 06:20:17.887512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 06:20:17.896274       1 config.go:309] "Starting node config controller"
	I1205 06:20:17.896367       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 06:20:17.896400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 06:20:17.999151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 06:20:17.999199       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 06:20:17.999238       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [74d658d21b8f8d10a9787dc4cd9702840c823b1685bbb1e46af5681e986ca49f] <==
	I1205 06:19:29.651701       1 serving.go:386] Generated self-signed cert in-memory
	W1205 06:19:32.342216       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 06:19:32.342242       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 06:19:32.342252       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 06:19:32.342259       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 06:19:32.441671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 06:19:32.478737       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:19:32.488342       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:19:32.488495       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:19:32.488514       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:19:32.488535       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 06:19:32.589321       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:19:55.588627       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1205 06:19:55.598650       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1205 06:19:55.598706       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 06:19:55.598739       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:19:55.600063       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1205 06:19:55.600109       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b22c489f0e3fa07e937f34eb5c76ed1d18aa7d1dd8114e907a50e7f66c424124] <==
	I1205 06:20:14.001648       1 serving.go:386] Generated self-signed cert in-memory
	W1205 06:20:16.358725       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 06:20:16.358758       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 06:20:16.358769       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 06:20:16.358777       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 06:20:16.471047       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 06:20:16.471140       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 06:20:16.489171       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:20:16.489274       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 06:20:16.495069       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 06:20:16.495174       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 06:20:16.589780       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 05 06:28:10 functional-252233 kubelet[3885]: E1205 06:28:10.184546    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:28:22 functional-252233 kubelet[3885]: E1205 06:28:22.184129    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:28:25 functional-252233 kubelet[3885]: E1205 06:28:25.184310    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:28:35 functional-252233 kubelet[3885]: E1205 06:28:35.184002    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:28:40 functional-252233 kubelet[3885]: E1205 06:28:40.183905    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:28:46 functional-252233 kubelet[3885]: E1205 06:28:46.184609    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:28:52 functional-252233 kubelet[3885]: E1205 06:28:52.184403    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:28:59 functional-252233 kubelet[3885]: E1205 06:28:59.183940    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:29:03 functional-252233 kubelet[3885]: E1205 06:29:03.186251    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:29:14 functional-252233 kubelet[3885]: E1205 06:29:14.184574    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:29:18 functional-252233 kubelet[3885]: E1205 06:29:18.184104    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:29:28 functional-252233 kubelet[3885]: E1205 06:29:28.184277    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:29:29 functional-252233 kubelet[3885]: E1205 06:29:29.184113    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:29:42 functional-252233 kubelet[3885]: E1205 06:29:42.184632    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:29:43 functional-252233 kubelet[3885]: E1205 06:29:43.185032    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:29:53 functional-252233 kubelet[3885]: E1205 06:29:53.184844    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:29:57 functional-252233 kubelet[3885]: E1205 06:29:57.184654    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:30:08 functional-252233 kubelet[3885]: E1205 06:30:08.184551    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:30:08 functional-252233 kubelet[3885]: E1205 06:30:08.184627    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:30:22 functional-252233 kubelet[3885]: E1205 06:30:22.184165    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:30:23 functional-252233 kubelet[3885]: E1205 06:30:23.184579    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:30:34 functional-252233 kubelet[3885]: E1205 06:30:34.184116    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:30:35 functional-252233 kubelet[3885]: E1205 06:30:35.186097    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	Dec 05 06:30:48 functional-252233 kubelet[3885]: E1205 06:30:48.183868    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cprsz" podUID="2489e9e8-e3a8-40e9-b275-d19ed8eed261"
	Dec 05 06:30:48 functional-252233 kubelet[3885]: E1205 06:30:48.184171    3885 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c8mht" podUID="0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4"
	
	
	==> storage-provisioner [7da0670c510348a64f970eea198eec9aa72012a729a38f7d277ad109caafbce2] <==
	I1205 06:19:28.008428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 06:19:32.476514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 06:19:32.476567       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1205 06:19:32.581932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:36.056103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:40.316533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:43.914347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:46.967629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:49.990073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:49.995496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 06:19:49.995647       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 06:19:49.995974       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-252233_3aabbd7c-d69f-429f-b3da-42a24f8ac5f8!
	I1205 06:19:49.996847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7b818742-55f0-4b04-ac26-e658f8bc79f1", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-252233_3aabbd7c-d69f-429f-b3da-42a24f8ac5f8 became leader
	W1205 06:19:50.014058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:50.027995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1205 06:19:50.096419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-252233_3aabbd7c-d69f-429f-b3da-42a24f8ac5f8!
	W1205 06:19:52.030798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:52.038022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:54.041688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:19:54.046860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ef28bc09aa5cf50a4017cc0c2988d2b58a3f446e98d74d8afda4715987509a5c] <==
	W1205 06:30:25.661613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:27.665321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:27.670676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:29.673402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:29.680311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:31.684301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:31.688849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:33.691442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:33.699058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:35.707051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:35.711696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:37.714443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:37.719332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:39.721912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:39.726169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:41.729851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:41.736812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:43.739417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:43.743803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:45.746732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:45.751114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:47.754268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:47.760780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:49.763632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1205 06:30:49.770647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-252233 -n functional-252233
helpers_test.go:269: (dbg) Run:  kubectl --context functional-252233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-c8mht hello-node-connect-7d85dfc575-cprsz
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-252233 describe pod hello-node-75c85bcc94-c8mht hello-node-connect-7d85dfc575-cprsz
helpers_test.go:290: (dbg) kubectl --context functional-252233 describe pod hello-node-75c85bcc94-c8mht hello-node-connect-7d85dfc575-cprsz:

-- stdout --
	Name:             hello-node-75c85bcc94-c8mht
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-252233/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:21:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lpj4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8lpj4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m45s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c8mht to functional-252233
	  Normal   Pulling    6m57s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m57s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m57s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m44s (x20 over 9m45s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m33s (x21 over 9m45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-cprsz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-252233/192.168.49.2
	Start Time:       Fri, 05 Dec 2025 06:20:48 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qw7nr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qw7nr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cprsz to functional-252233
	  Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.60s)

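Note: every pull failure in this test traces to one root cause, visible in the kubelet log and the pod events above: the deployment references the short image name "kicbase/echo-server", and CRI-O's short-name mode is set to enforcing, so the ambiguous name is rejected instead of being resolved against a default registry. A minimal sketch of a workaround, assuming the image is hosted on Docker Hub (fully qualifying the reference bypasses short-name resolution entirely):

	# Hypothetical fix: pin the registry explicitly so CRI-O never has to
	# resolve the ambiguous short name "kicbase/echo-server".
	kubectl --context functional-252233 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:latest
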
TestFunctional/parallel/ServiceCmd/DeployApp (601s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-252233 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-252233 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-c8mht" [0b1aebfe-7e38-49ba-9fc6-75c321bf7cc4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1205 06:21:29.255235  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:23:45.392209  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:24:13.096766  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:28:45.392336  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-252233 -n functional-252233
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-05 06:31:06.421278785 +0000 UTC m=+1206.489611530
functional_test.go:1460: (dbg) Run:  kubectl --context functional-252233 describe po hello-node-75c85bcc94-c8mht -n default
functional_test.go:1460: (dbg) kubectl --context functional-252233 describe po hello-node-75c85bcc94-c8mht -n default:
Name:             hello-node-75c85bcc94-c8mht
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-252233/192.168.49.2
Start Time:       Fri, 05 Dec 2025 06:21:05 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lpj4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8lpj4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c8mht to functional-252233
Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-252233 logs hello-node-75c85bcc94-c8mht -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-252233 logs hello-node-75c85bcc94-c8mht -n default: exit status 1 (129.678179ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-c8mht" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-252233 logs hello-node-75c85bcc94-c8mht -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.00s)

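Note: the ServiceCmd subtests that follow (HTTPS, Format, URL) fail downstream of this one: the hello-node Service exists and has NodePort 31959 allocated, but no pod behind it ever became Ready, so `minikube service` exits with SVC_UNREACHABLE. A quick sketch to confirm that state, assuming the same kubectl context as the log:

	# With no Ready pods, the endpoints list stays empty even though the
	# Service object and its NodePort exist.
	kubectl --context functional-252233 get endpoints hello-node
	kubectl --context functional-252233 get pods -l app=hello-node
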
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 service --namespace=default --https --url hello-node: exit status 115 (520.202691ms)

-- stdout --
	https://192.168.49.2:31959
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-252233 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 service hello-node --url --format={{.IP}}: exit status 115 (491.648758ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-252233 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 service hello-node --url: exit status 115 (546.387364ms)

-- stdout --
	http://192.168.49.2:31959
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-252233 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31959
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)

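Note: `minikube service --url` still prints http://192.168.49.2:31959 before exiting non-zero because the URL is derived from the Service's NodePort rather than from live endpoints. Once the image pull is fixed, the same URL should start answering; a sketch of the check, assuming the node IP and port reported above:

	# Expected to succeed only once a hello-node pod is Running and Ready.
	curl -s http://192.168.49.2:31959
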
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image load --daemon kicbase/echo-server:functional-252233 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-252233" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

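Note: `image load --daemon` copies a tag from the host's Docker daemon into the cluster's container runtime, and the follow-up `image ls` is the assertion that the transfer landed. A sketch of the manual round trip, assuming the tag used by the test:

	# Load the host-side tag into the minikube runtime, then list the
	# runtime's images to confirm the tag arrived.
	out/minikube-linux-arm64 -p functional-252233 image load --daemon kicbase/echo-server:functional-252233
	out/minikube-linux-arm64 -p functional-252233 image ls | grep echo-server
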
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image load --daemon kicbase/echo-server:functional-252233 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-252233" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-252233
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image load --daemon kicbase/echo-server:functional-252233 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-252233" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image save kicbase/echo-server:functional-252233 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

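Note: the ImageSaveToFile failure above and the ImageLoadFromFile failure below are a coupled pair: `image save` should write a tarball that `image load` reads back, so once the save step produces no file, the load step can only fail with "no such file or directory". A sketch of the round trip in isolation, assuming a writable /tmp path:

	# If the save half works, the tarball exists and the load half should
	# import it back into the cluster runtime.
	out/minikube-linux-arm64 -p functional-252233 image save kicbase/echo-server:functional-252233 /tmp/echo-server-save.tar
	test -f /tmp/echo-server-save.tar && \
	  out/minikube-linux-arm64 -p functional-252233 image load /tmp/echo-server-save.tar
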
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1205 06:31:20.185839  472058 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:31:20.186132  472058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:20.186164  472058 out.go:374] Setting ErrFile to fd 2...
	I1205 06:31:20.186186  472058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:20.186522  472058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:31:20.187267  472058 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:31:20.187477  472058 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:31:20.188060  472058 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
	I1205 06:31:20.205685  472058 ssh_runner.go:195] Run: systemctl --version
	I1205 06:31:20.205749  472058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
	I1205 06:31:20.222769  472058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
	I1205 06:31:20.326146  472058 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1205 06:31:20.326232  472058 cache_images.go:255] Failed to load cached images for "functional-252233": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1205 06:31:20.326266  472058 cache_images.go:267] failed pushing to: functional-252233

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-252233
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image save --daemon kicbase/echo-server:functional-252233 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-252233
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-252233: exit status 1 (24.020281ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-252233

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-252233

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (509.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1205 06:33:45.396479  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:08.460497  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.247073  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.253564  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.265023  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.286471  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.328008  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.409566  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.571100  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:39.892805  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:40.534953  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:41.816388  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:44.377787  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:49.499585  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:35:59.741918  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:36:20.223378  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:37:01.186260  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:38:23.110563  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:38:45.391339  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m28.383887099s)

-- stdout --
	* [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:35825
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:35825 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-787602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-787602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448596s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001639162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001639162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
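
The start above exits 109 (K8S_KUBELET_NOT_RUNNING): kubeadm wrote the static Pod manifests, but the kubelet never answered http://127.0.0.1:10248/healthz within 4 minutes, on all three attempts. A minimal sketch of the follow-ups the output itself recommends, reusing the profile name and flags from the failed invocation; whether the cgroup-driver override actually resolves this on a cgroups-v1 node is an open question, given the v1.35 cgroup v1 deprecation warning in the preflight output:

    # Inspect the kubelet inside the node container (the kubeadm output suggests both commands)
    out/minikube-linux-arm64 ssh -p functional-787602 -- sudo systemctl status kubelet
    out/minikube-linux-arm64 ssh -p functional-787602 -- sudo journalctl -xeu kubelet

    # Retry with the cgroup driver override minikube suggests alongside issue #4172
    out/minikube-linux-arm64 start -p functional-787602 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

    # Capture full logs for a bug report, as the advice box suggests
    out/minikube-linux-arm64 logs -p functional-787602 --file=logs.txt
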
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
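
The inspect output shows the kic container itself is healthy (State.Running true, apiserver port 8441/tcp published on 127.0.0.1:33151), so the cluster is unreachable because the kubelet died, not because of the Docker port mapping. The same mappings can be read back without parsing the JSON; a sketch against the container name from above:

    # Host endpoint for the published apiserver port
    docker port functional-787602 8441/tcp
    # All published ports at once (22, 2376, 5000, 8441, 32443)
    docker port functional-787602
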
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 6 (342.200476ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 06:39:58.329053  479816 status.go:458] kubeconfig endpoint: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
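
The status probe exits 6 because the aborted start never wrote "functional-787602" into the kubeconfig, which is also why kubectl still points at a stale context. The fix the warning names only helps once a start succeeds; a sketch using the commands the output itself suggests:

    # Repoint kubectl at the profile, per the status warning
    out/minikube-linux-arm64 update-context -p functional-787602
    # Re-check host / kubelet / apiserver state afterwards
    out/minikube-linux-arm64 status -p functional-787602
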
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image save kicbase/echo-server:functional-252233 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image rm kicbase/echo-server:functional-252233 --alsologtostderr                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/test/nested/copy/444147/hosts                                                                                         │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image save --daemon kicbase/echo-server:functional-252233 --alsologtostderr                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/444147.pem                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /usr/share/ca-certificates/444147.pem                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/4441472.pem                                                                                                 │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /usr/share/ca-certificates/4441472.pem                                                                                     │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format short --alsologtostderr                                                                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh pgrep buildkitd                                                                                                                     │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image          │ functional-252233 image ls --format yaml --alsologtostderr                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                                    │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format json --alsologtostderr                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format table --alsologtostderr                                                                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete         │ -p functional-252233                                                                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start          │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:31:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:31:29.652377  473668 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:31:29.652498  473668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:29.652502  473668 out.go:374] Setting ErrFile to fd 2...
	I1205 06:31:29.652507  473668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:29.652734  473668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:31:29.653139  473668 out.go:368] Setting JSON to false
	I1205 06:31:29.653969  473668 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11617,"bootTime":1764904673,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:31:29.654027  473668 start.go:143] virtualization:  
	I1205 06:31:29.657976  473668 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:31:29.662102  473668 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:31:29.662287  473668 notify.go:221] Checking for updates...
	I1205 06:31:29.668348  473668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:31:29.671493  473668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:31:29.674474  473668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:31:29.677374  473668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:31:29.680444  473668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:31:29.683621  473668 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:31:29.720144  473668 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:31:29.720268  473668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:31:29.776434  473668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-05 06:31:29.767349692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:31:29.776531  473668 docker.go:319] overlay module found
	I1205 06:31:29.779672  473668 out.go:179] * Using the docker driver based on user configuration
	I1205 06:31:29.782548  473668 start.go:309] selected driver: docker
	I1205 06:31:29.782559  473668 start.go:927] validating driver "docker" against <nil>
	I1205 06:31:29.782572  473668 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:31:29.783285  473668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:31:29.836693  473668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-05 06:31:29.827825944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:31:29.836837  473668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:31:29.837065  473668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:31:29.840003  473668 out.go:179] * Using Docker driver with root privileges
	I1205 06:31:29.842961  473668 cni.go:84] Creating CNI manager for ""
	I1205 06:31:29.843027  473668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:31:29.843034  473668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:31:29.843114  473668 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:31:29.846152  473668 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:31:29.849044  473668 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:31:29.852022  473668 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:31:29.854883  473668 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:31:29.854954  473668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:31:29.873778  473668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:31:29.873790  473668 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:31:29.913948  473668 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:31:30.081700  473668 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 06:31:30.081950  473668 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082058  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:31:30.082069  473668 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.935µs
	I1205 06:31:30.082084  473668 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:31:30.082095  473668 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082106  473668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:31:30.082125  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:31:30.082130  473668 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.719µs
	I1205 06:31:30.082143  473668 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:31:30.082140  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json: {Name:mk42ccbad5152a3b84993d29628afafae2c2aa7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:30.082160  473668 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082255  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:31:30.082261  473668 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 109.031µs
	I1205 06:31:30.082267  473668 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:31:30.082278  473668 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082312  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:31:30.082317  473668 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 40.189µs
	I1205 06:31:30.082322  473668 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:31:30.082330  473668 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:31:30.082331  473668 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082354  473668 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082360  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:31:30.082365  473668 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 34.922µs
	I1205 06:31:30.082370  473668 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:31:30.082408  473668 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082451  473668 start.go:364] duration metric: took 86.926µs to acquireMachinesLock for "functional-787602"
	I1205 06:31:30.082459  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:31:30.082463  473668 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 56.804µs
	I1205 06:31:30.082468  473668 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:31:30.082479  473668 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082505  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:31:30.082509  473668 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 33.715µs
	I1205 06:31:30.082513  473668 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:31:30.082470  473668 start.go:93] Provisioning new machine with config: &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:31:30.082521  473668 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:31:30.082532  473668 start.go:125] createHost starting for "" (driver="docker")
	I1205 06:31:30.082554  473668 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:31:30.082558  473668 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 37.342µs
	I1205 06:31:30.082562  473668 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:31:30.082568  473668 cache.go:87] Successfully saved all images to host disk.
	I1205 06:31:30.088134  473668 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1205 06:31:30.088455  473668 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:35825 to docker env.
	I1205 06:31:30.088485  473668 start.go:159] libmachine.API.Create for "functional-787602" (driver="docker")
	I1205 06:31:30.088525  473668 client.go:173] LocalClient.Create starting
	I1205 06:31:30.088642  473668 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem
	I1205 06:31:30.088685  473668 main.go:143] libmachine: Decoding PEM data...
	I1205 06:31:30.088719  473668 main.go:143] libmachine: Parsing certificate...
	I1205 06:31:30.088785  473668 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem
	I1205 06:31:30.088810  473668 main.go:143] libmachine: Decoding PEM data...
	I1205 06:31:30.088821  473668 main.go:143] libmachine: Parsing certificate...
	I1205 06:31:30.089272  473668 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 06:31:30.107482  473668 cli_runner.go:211] docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 06:31:30.107560  473668 network_create.go:284] running [docker network inspect functional-787602] to gather additional debugging logs...
	I1205 06:31:30.107577  473668 cli_runner.go:164] Run: docker network inspect functional-787602
	W1205 06:31:30.125508  473668 cli_runner.go:211] docker network inspect functional-787602 returned with exit code 1
	I1205 06:31:30.125532  473668 network_create.go:287] error running [docker network inspect functional-787602]: docker network inspect functional-787602: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-787602 not found
	I1205 06:31:30.125547  473668 network_create.go:289] output of [docker network inspect functional-787602]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-787602 not found
	
	** /stderr **
	I1205 06:31:30.125661  473668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:31:30.143802  473668 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0c5d0}
	I1205 06:31:30.143840  473668 network_create.go:124] attempt to create docker network functional-787602 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 06:31:30.143901  473668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-787602 functional-787602
	I1205 06:31:30.206704  473668 network_create.go:108] docker network functional-787602 192.168.49.0/24 created
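	The free-subnet pick above can be sanity-checked from the host. A minimal sketch, assuming the docker CLI is on PATH and the network carries the profile name as in this run:

	    docker network inspect functional-787602 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected for this run: 192.168.49.0/24 192.168.49.1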
	I1205 06:31:30.206735  473668 kic.go:121] calculated static IP "192.168.49.2" for the "functional-787602" container
	I1205 06:31:30.206811  473668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 06:31:30.223026  473668 cli_runner.go:164] Run: docker volume create functional-787602 --label name.minikube.sigs.k8s.io=functional-787602 --label created_by.minikube.sigs.k8s.io=true
	I1205 06:31:30.242070  473668 oci.go:103] Successfully created a docker volume functional-787602
	I1205 06:31:30.242149  473668 cli_runner.go:164] Run: docker run --rm --name functional-787602-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-787602 --entrypoint /usr/bin/test -v functional-787602:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 06:31:30.768345  473668 oci.go:107] Successfully prepared a docker volume functional-787602
	I1205 06:31:30.768406  473668 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1205 06:31:30.768547  473668 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 06:31:30.768652  473668 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 06:31:30.823914  473668 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-787602 --name functional-787602 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-787602 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-787602 --network functional-787602 --ip 192.168.49.2 --volume functional-787602:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
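	Every guest port in the docker run above (--publish=127.0.0.1::8441 and the others) is bound to an ephemeral host port on the loopback interface. A minimal sketch for recovering the SSH mapping, assuming the container name from this run:

	    docker port functional-787602 22
	    # prints e.g. 127.0.0.1:33148, the address the SSH steps below dial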
	I1205 06:31:31.132951  473668 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Running}}
	I1205 06:31:31.155794  473668 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:31:31.183786  473668 cli_runner.go:164] Run: docker exec functional-787602 stat /var/lib/dpkg/alternatives/iptables
	I1205 06:31:31.240648  473668 oci.go:144] the created container "functional-787602" has a running status.
	I1205 06:31:31.240668  473668 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa...
	I1205 06:31:31.388553  473668 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 06:31:31.418695  473668 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:31:31.442812  473668 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 06:31:31.442823  473668 kic_runner.go:114] Args: [docker exec --privileged functional-787602 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 06:31:31.502286  473668 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:31:31.525813  473668 machine.go:94] provisionDockerMachine start ...
	I1205 06:31:31.525901  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:31.558785  473668 main.go:143] libmachine: Using SSH client type: native
	I1205 06:31:31.559119  473668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:31:31.559126  473668 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:31:31.559864  473668 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56520->127.0.0.1:33148: read: connection reset by peer
	I1205 06:31:34.714054  473668 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:31:34.714067  473668 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:31:34.714141  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:34.732325  473668 main.go:143] libmachine: Using SSH client type: native
	I1205 06:31:34.732630  473668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:31:34.732639  473668 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:31:34.892334  473668 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:31:34.892400  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:34.912027  473668 main.go:143] libmachine: Using SSH client type: native
	I1205 06:31:34.912359  473668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:31:34.912373  473668 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:31:35.066766  473668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
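	The SSH snippet above pins the hostname in the guest's /etc/hosts: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. Either way the file ends up containing a line of the form:

	    127.0.1.1 functional-787602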
	I1205 06:31:35.066783  473668 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:31:35.066802  473668 ubuntu.go:190] setting up certificates
	I1205 06:31:35.066810  473668 provision.go:84] configureAuth start
	I1205 06:31:35.066874  473668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:31:35.085432  473668 provision.go:143] copyHostCerts
	I1205 06:31:35.085495  473668 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:31:35.085503  473668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:31:35.085588  473668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:31:35.085693  473668 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:31:35.085697  473668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:31:35.085723  473668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:31:35.085772  473668 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:31:35.085776  473668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:31:35.085798  473668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:31:35.085843  473668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
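	The server certificate generated here embeds the SANs listed in the log line. A minimal verification sketch, assuming openssl is available on the host and using the server.pem path from this run:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	    # should list DNS:functional-787602, DNS:localhost, DNS:minikube,
	    # IP Address:127.0.0.1 and IP Address:192.168.49.2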
	I1205 06:31:35.367245  473668 provision.go:177] copyRemoteCerts
	I1205 06:31:35.367298  473668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:31:35.367337  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:35.386110  473668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:31:35.490032  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:31:35.507263  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:31:35.525199  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:31:35.542365  473668 provision.go:87] duration metric: took 475.542312ms to configureAuth
	I1205 06:31:35.542401  473668 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:31:35.542597  473668 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:31:35.542694  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:35.559275  473668 main.go:143] libmachine: Using SSH client type: native
	I1205 06:31:35.559589  473668 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:31:35.559602  473668 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:31:35.869144  473668 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:31:35.869156  473668 machine.go:97] duration metric: took 4.343330679s to provisionDockerMachine
	I1205 06:31:35.869166  473668 client.go:176] duration metric: took 5.780636044s to LocalClient.Create
	I1205 06:31:35.869184  473668 start.go:167] duration metric: took 5.780700192s to libmachine.API.Create "functional-787602"
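	Machine provisioning closes with the container-runtime options step above: a one-line environment file is written and CRI-O restarted to pick it up. Per the printf in the log, the file left behind is:

	    # /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	Presumably the crio.service unit in the kicbase image sources this file via an EnvironmentFile= directive, so the service CIDR is trusted as an insecure registry range from then on.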
	I1205 06:31:35.869190  473668 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:31:35.869201  473668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:31:35.869270  473668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:31:35.869307  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:35.887480  473668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:31:35.994343  473668 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:31:35.997486  473668 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:31:35.997504  473668 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:31:35.997513  473668 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:31:35.997566  473668 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:31:35.997654  473668 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:31:35.997731  473668 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:31:35.997779  473668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:31:36.009083  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:31:36.029119  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:31:36.048215  473668 start.go:296] duration metric: took 179.011192ms for postStartSetup
	I1205 06:31:36.048594  473668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:31:36.066274  473668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:31:36.066580  473668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:31:36.066624  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:36.083707  473668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:31:36.183406  473668 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:31:36.187880  473668 start.go:128] duration metric: took 6.105331669s to createHost
	I1205 06:31:36.187895  473668 start.go:83] releasing machines lock for "functional-787602", held for 6.105437344s
	I1205 06:31:36.187970  473668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:31:36.209118  473668 out.go:179] * Found network options:
	I1205 06:31:36.211917  473668 out.go:179]   - HTTP_PROXY=localhost:35825
	W1205 06:31:36.214866  473668 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1205 06:31:36.217801  473668 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1205 06:31:36.220644  473668 ssh_runner.go:195] Run: cat /version.json
	I1205 06:31:36.220710  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:36.220723  473668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:31:36.220783  473668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:31:36.239850  473668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:31:36.239927  473668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:31:36.432701  473668 ssh_runner.go:195] Run: systemctl --version
	I1205 06:31:36.439335  473668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:31:36.473765  473668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:31:36.478230  473668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:31:36.478297  473668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:31:36.506015  473668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 06:31:36.506029  473668 start.go:496] detecting cgroup driver to use...
	I1205 06:31:36.506063  473668 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:31:36.506111  473668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:31:36.524492  473668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:31:36.537249  473668 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:31:36.537308  473668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:31:36.554831  473668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:31:36.574522  473668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:31:36.693618  473668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:31:36.820877  473668 docker.go:234] disabling docker service ...
	I1205 06:31:36.820940  473668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:31:36.842021  473668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:31:36.855762  473668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:31:36.987415  473668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:31:37.114180  473668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:31:37.126833  473668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:31:37.141003  473668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:31:37.141077  473668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.150181  473668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:31:37.150239  473668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.159215  473668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.168311  473668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.177418  473668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:31:37.185352  473668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.194451  473668 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.208289  473668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:31:37.217427  473668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:31:37.225345  473668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:31:37.233115  473668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:31:37.350850  473668 ssh_runner.go:195] Run: sudo systemctl restart crio
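	Taken together, the sed edits above shape /etc/crio/crio.conf.d/02-crio.conf before this restart. A sketch of the resulting drop-in, reconstructed from the commands rather than captured from the node (the TOML section headers are assumed from the stock CRI-O config layout):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]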
	I1205 06:31:37.520581  473668 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:31:37.520654  473668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:31:37.524580  473668 start.go:564] Will wait 60s for crictl version
	I1205 06:31:37.524632  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:37.528317  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:31:37.555938  473668 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:31:37.556027  473668 ssh_runner.go:195] Run: crio --version
	I1205 06:31:37.586468  473668 ssh_runner.go:195] Run: crio --version
	I1205 06:31:37.619220  473668 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:31:37.622014  473668 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:31:37.638234  473668 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:31:37.642430  473668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:31:37.652613  473668 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:31:37.652710  473668 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:31:37.652751  473668 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:31:37.676766  473668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 06:31:37.676789  473668 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 06:31:37.676848  473668 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:37.677078  473668 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:37.677174  473668 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:37.677285  473668 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:37.677386  473668 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:37.677483  473668 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 06:31:37.677571  473668 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:37.677669  473668 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:37.678756  473668 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:37.678916  473668 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:37.679218  473668 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:37.679736  473668 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:37.679982  473668 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 06:31:37.680124  473668 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:37.680974  473668 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:37.681263  473668 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:38.005977  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 06:31:38.017568  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:38.018073  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:38.022595  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:38.023107  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:38.040556  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:38.054433  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:38.105682  473668 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 06:31:38.105714  473668 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 06:31:38.105768  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.122684  473668 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 06:31:38.122724  473668 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:38.122781  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.145211  473668 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 06:31:38.145255  473668 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:38.145306  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.190911  473668 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 06:31:38.190942  473668 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:38.190993  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.198463  473668 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 06:31:38.198506  473668 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:38.198563  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.198626  473668 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 06:31:38.198650  473668 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:38.198692  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.203542  473668 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 06:31:38.203571  473668 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:38.203623  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:38.203706  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 06:31:38.203761  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:38.203806  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:38.203849  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:38.207693  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:38.207749  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:38.294651  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:38.294739  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 06:31:38.294779  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:38.294802  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:38.294848  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:38.295497  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:38.295588  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:38.404208  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 06:31:38.404273  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:38.404316  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 06:31:38.404379  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 06:31:38.404450  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 06:31:38.408557  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 06:31:38.408643  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 06:31:38.518647  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 06:31:38.518664  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 06:31:38.518748  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:31:38.518757  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:31:38.518851  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 06:31:38.518857  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 06:31:38.518903  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 06:31:38.518925  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 06:31:38.518967  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:31:38.521992  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 06:31:38.522096  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:31:38.522188  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 06:31:38.522241  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:31:38.554827  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 06:31:38.554861  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 06:31:38.554862  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 06:31:38.554882  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 06:31:38.554918  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 06:31:38.554925  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 06:31:38.554979  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 06:31:38.555066  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:31:38.555120  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 06:31:38.555130  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 06:31:38.555172  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 06:31:38.555179  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 06:31:38.555209  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 06:31:38.555218  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1205 06:31:38.608874  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 06:31:38.608903  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 06:31:38.670673  473668 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 06:31:38.670739  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1205 06:31:38.964735  473668 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 06:31:38.964896  473668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:39.067196  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 06:31:39.067219  473668 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:31:39.067270  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 06:31:39.149983  473668 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 06:31:39.150016  473668 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:39.150065  473668 ssh_runner.go:195] Run: which crictl
	I1205 06:31:40.619383  473668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.552092221s)
	I1205 06:31:40.619399  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 06:31:40.619416  473668 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:31:40.619461  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1205 06:31:40.619521  473668 ssh_runner.go:235] Completed: which crictl: (1.469448912s)
	I1205 06:31:40.619547  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:41.827510  473668 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.207941728s)
	I1205 06:31:41.827577  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:41.827710  473668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.208239324s)
	I1205 06:31:41.827721  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 06:31:41.827736  473668 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:31:41.827781  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 06:31:41.859961  473668 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:31:43.145715  473668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.317912084s)
	I1205 06:31:43.145732  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 06:31:43.145751  473668 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:31:43.145801  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1205 06:31:43.145873  473668 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.285900206s)
	I1205 06:31:43.145902  473668 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 06:31:43.145969  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:31:44.928922  473668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.783100748s)
	I1205 06:31:44.928939  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 06:31:44.928956  473668 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:31:44.929003  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 06:31:44.929008  473668 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.783020295s)
	I1205 06:31:44.929040  473668 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 06:31:44.929123  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 06:31:46.366813  473668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.437789682s)
	I1205 06:31:46.366830  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 06:31:46.366848  473668 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:31:46.366898  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 06:31:47.504022  473668 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.137103548s)
	I1205 06:31:47.504038  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 06:31:47.504055  473668 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:31:47.504105  473668 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 06:31:48.061807  473668 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 06:31:48.061832  473668 cache_images.go:125] Successfully loaded all cached images
	I1205 06:31:48.061837  473668 cache_images.go:94] duration metric: took 10.385035329s to LoadCachedImages
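	All eight images are now in CRI-O's store. A quick check from inside the node would be:

	    sudo /usr/local/bin/crictl images
	    # should list kube-apiserver, kube-controller-manager, kube-scheduler and
	    # kube-proxy at v1.35.0-beta.0, etcd 3.6.5-0, coredns v1.13.1,
	    # pause 3.10.1 and storage-provisioner v5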
	I1205 06:31:48.061848  473668 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:31:48.061936  473668 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
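	The [Service] override rendered above is installed later in this log as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once the files are in place, the effective unit can be reviewed with systemd itself:

	    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart   # the merged kubelet command line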
	I1205 06:31:48.062022  473668 ssh_runner.go:195] Run: crio config
	I1205 06:31:48.136895  473668 cni.go:84] Creating CNI manager for ""
	I1205 06:31:48.136906  473668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:31:48.136928  473668 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:31:48.136950  473668 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:31:48.137084  473668 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
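	This three-document config is what lands in /var/tmp/minikube/kubeadm.yaml.new (2221 bytes, written below) and drives cluster bring-up. A sketch of how such a file is consumed, assuming kubeadm from the staged binaries directory; the exact flag set minikube passes is not shown in this log:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests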
	
	I1205 06:31:48.137155  473668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:31:48.145072  473668 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 06:31:48.145138  473668 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:31:48.152972  473668 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 06:31:48.153027  473668 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 06:31:48.153071  473668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:31:48.153085  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 06:31:48.153172  473668 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 06:31:48.153237  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 06:31:48.160288  473668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 06:31:48.160315  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 06:31:48.172775  473668 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 06:31:48.172831  473668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 06:31:48.172841  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 06:31:48.189796  473668 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 06:31:48.189828  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
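Note: the "Not caching binary" lines above use dl.k8s.io URLs whose ?checksum= parameter tells minikube to verify the published sha256 during transfer. A minimal manual equivalent, assuming plain curl and coreutils on the node (not minikube's actual code path):

	# The .sha256 file holds only the digest, so build the "<hash>  <file>" line
	# that sha256sum --check expects.
	curl -fsSLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
	curl -fsSL https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -o kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check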
	I1205 06:31:48.962865  473668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:31:48.971707  473668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:31:48.984744  473668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:31:48.999194  473668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 06:31:49.013124  473668 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:31:49.016860  473668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 06:31:49.026619  473668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:31:49.145952  473668 ssh_runner.go:195] Run: sudo systemctl start kubelet
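Note: minikube starts the kubelet with 'systemctl start' but does not enable the unit, which is why kubeadm later emits [WARNING Service-kubelet] in this run. If enabling were wanted (an assumption — this log never does it), the standard form is:

	sudo systemctl enable --now kubelet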
	I1205 06:31:49.161521  473668 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:31:49.161532  473668 certs.go:195] generating shared ca certs ...
	I1205 06:31:49.161548  473668 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.161696  473668 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:31:49.161740  473668 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:31:49.161746  473668 certs.go:257] generating profile certs ...
	I1205 06:31:49.161802  473668 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:31:49.161812  473668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt with IP's: []
	I1205 06:31:49.299206  473668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt ...
	I1205 06:31:49.299224  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: {Name:mk73bc1a4f1afd05709f8c96f3133ea5120daaaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.299435  473668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key ...
	I1205 06:31:49.299442  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key: {Name:mk5ebe3405bbf71e79c7b581971215fd1a6f5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.299533  473668 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:31:49.299545  473668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt.16d29bb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 06:31:49.597845  473668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt.16d29bb2 ...
	I1205 06:31:49.597860  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt.16d29bb2: {Name:mk02a90d5b7df7feac2f82028435edd132ecf5db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.598056  473668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2 ...
	I1205 06:31:49.598065  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2: {Name:mkeebf7a663972376e0312024120376b6c6a10b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.598148  473668 certs.go:382] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt.16d29bb2 -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt
	I1205 06:31:49.598227  473668 certs.go:386] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2 -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key
	I1205 06:31:49.598287  473668 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:31:49.598299  473668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt with IP's: []
	I1205 06:31:49.871901  473668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt ...
	I1205 06:31:49.871917  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt: {Name:mk362c8eeffeda2c3afb1bb960d416adc61a15be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.872115  473668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key ...
	I1205 06:31:49.872123  473668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key: {Name:mk7c30ba63860c6900cebcdd3715a54bfc1235c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:31:49.872325  473668 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:31:49.872365  473668 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:31:49.872372  473668 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:31:49.872399  473668 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:31:49.872421  473668 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:31:49.872445  473668 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:31:49.872487  473668 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:31:49.873079  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:31:49.894128  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:31:49.913210  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:31:49.932671  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:31:49.950494  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:31:49.969350  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:31:49.989297  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:31:50.015759  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:31:50.040072  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:31:50.063118  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:31:50.084537  473668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:31:50.106156  473668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:31:50.120436  473668 ssh_runner.go:195] Run: openssl version
	I1205 06:31:50.126774  473668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:31:50.134959  473668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:31:50.144773  473668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:31:50.149583  473668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:31:50.149653  473668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:31:50.196247  473668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:31:50.203821  473668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/444147.pem /etc/ssl/certs/51391683.0
	I1205 06:31:50.211179  473668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:31:50.218787  473668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:31:50.226322  473668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:31:50.230205  473668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:31:50.230265  473668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:31:50.271989  473668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:31:50.279422  473668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4441472.pem /etc/ssl/certs/3ec20f2e.0
	I1205 06:31:50.286689  473668 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:31:50.293962  473668 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:31:50.301449  473668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:31:50.305003  473668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:31:50.305064  473668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:31:50.346031  473668 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:31:50.353835  473668 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
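Note: the ln -fs steps above implement OpenSSL's hashed-symlink CA lookup convention: each CA file gets a <subject-hash>.0 symlink in /etc/ssl/certs, and the hash is exactly what 'openssl x509 -hash -noout' prints (b5213941 for minikubeCA.pem here). A sketch of deriving one link name by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"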
	I1205 06:31:50.361850  473668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:31:50.365698  473668 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 06:31:50.365746  473668 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:31:50.365814  473668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:31:50.365883  473668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:31:50.397035  473668 cri.go:89] found id: ""
	I1205 06:31:50.397101  473668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:31:50.405063  473668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:31:50.413135  473668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:31:50.413219  473668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:31:50.421357  473668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:31:50.421367  473668 kubeadm.go:158] found existing configuration files:
	
	I1205 06:31:50.421429  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:31:50.429421  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:31:50.429484  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:31:50.436924  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:31:50.444848  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:31:50.444919  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:31:50.452374  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:31:50.460302  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:31:50.460357  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:31:50.468282  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:31:50.476203  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:31:50.476265  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:31:50.484093  473668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:31:50.608312  473668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:31:50.608728  473668 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:31:50.673578  473668 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:35:55.357825  473668 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:35:55.357852  473668 kubeadm.go:319] 
	I1205 06:35:55.357922  473668 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:35:55.361813  473668 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:35:55.361862  473668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:35:55.361950  473668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:35:55.362004  473668 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:35:55.362040  473668 kubeadm.go:319] OS: Linux
	I1205 06:35:55.362084  473668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:35:55.362131  473668 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:35:55.362177  473668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:35:55.362224  473668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:35:55.362271  473668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:35:55.362318  473668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:35:55.362362  473668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:35:55.362427  473668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:35:55.362472  473668 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:35:55.362543  473668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:35:55.362654  473668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:35:55.362743  473668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:35:55.362804  473668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:35:55.365900  473668 out.go:252]   - Generating certificates and keys ...
	I1205 06:35:55.365984  473668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:35:55.366049  473668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:35:55.366114  473668 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 06:35:55.366178  473668 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 06:35:55.366237  473668 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 06:35:55.366285  473668 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 06:35:55.366337  473668 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 06:35:55.366498  473668 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-787602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:35:55.366556  473668 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 06:35:55.366696  473668 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-787602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 06:35:55.366764  473668 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 06:35:55.366838  473668 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 06:35:55.366889  473668 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 06:35:55.366944  473668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:35:55.366994  473668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:35:55.367050  473668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:35:55.367106  473668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:35:55.367180  473668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:35:55.367241  473668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:35:55.367328  473668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:35:55.367393  473668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:35:55.370323  473668 out.go:252]   - Booting up control plane ...
	I1205 06:35:55.370464  473668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:35:55.370565  473668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:35:55.370644  473668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:35:55.370765  473668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:35:55.370859  473668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:35:55.370961  473668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:35:55.371058  473668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:35:55.371096  473668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:35:55.371230  473668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:35:55.371357  473668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:35:55.371431  473668 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000448596s
	I1205 06:35:55.371435  473668 kubeadm.go:319] 
	I1205 06:35:55.371495  473668 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:35:55.371537  473668 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:35:55.371640  473668 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:35:55.371643  473668 kubeadm.go:319] 
	I1205 06:35:55.371775  473668 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:35:55.371818  473668 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:35:55.371849  473668 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:35:55.371858  473668 kubeadm.go:319] 
	W1205 06:35:55.371969  473668 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-787602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-787602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448596s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
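Note: the failure above is kubeadm timing out on the kubelet's local healthz endpoint, so the kubelet itself never became healthy and the control-plane static pods were never started. Beyond the two commands kubeadm suggests, directly probing the endpoint it was polling narrows things down (a sketch; run on the node):

	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	curl -sS http://127.0.0.1:10248/healthz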
	
	I1205 06:35:55.372067  473668 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:35:55.779713  473668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:35:55.792352  473668 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:35:55.792408  473668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:35:55.800241  473668 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:35:55.800251  473668 kubeadm.go:158] found existing configuration files:
	
	I1205 06:35:55.800306  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:35:55.808013  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:35:55.808071  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:35:55.815405  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:35:55.823047  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:35:55.823103  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:35:55.830713  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:35:55.838515  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:35:55.838572  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:35:55.846147  473668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:35:55.854000  473668 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:35:55.854056  473668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:35:55.861695  473668 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:35:55.905275  473668 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:35:55.905429  473668 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:35:55.976539  473668 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:35:55.976604  473668 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:35:55.976639  473668 kubeadm.go:319] OS: Linux
	I1205 06:35:55.976682  473668 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:35:55.976729  473668 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:35:55.976775  473668 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:35:55.976822  473668 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:35:55.976869  473668 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:35:55.976915  473668 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:35:55.976960  473668 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:35:55.977006  473668 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:35:55.977060  473668 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:35:56.043364  473668 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:35:56.043520  473668 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:35:56.043620  473668 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:35:56.058824  473668 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:35:56.064381  473668 out.go:252]   - Generating certificates and keys ...
	I1205 06:35:56.064506  473668 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:35:56.064583  473668 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:35:56.064680  473668 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:35:56.064746  473668 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:35:56.064816  473668 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:35:56.064870  473668 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:35:56.064932  473668 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:35:56.064994  473668 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:35:56.065075  473668 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:35:56.065147  473668 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:35:56.065184  473668 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:35:56.065240  473668 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:35:56.192567  473668 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:35:56.486082  473668 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:35:56.599069  473668 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:35:57.189680  473668 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:35:57.355866  473668 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:35:57.356614  473668 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:35:57.359278  473668 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:35:57.362327  473668 out.go:252]   - Booting up control plane ...
	I1205 06:35:57.362436  473668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:35:57.362513  473668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:35:57.362581  473668 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:35:57.376713  473668 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:35:57.376815  473668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:35:57.386275  473668 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:35:57.386363  473668 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:35:57.386435  473668 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:35:57.527086  473668 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:35:57.527199  473668 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:39:57.526996  473668 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001639162s
	I1205 06:39:57.527022  473668 kubeadm.go:319] 
	I1205 06:39:57.527078  473668 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:39:57.527110  473668 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:39:57.527213  473668 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:39:57.527219  473668 kubeadm.go:319] 
	I1205 06:39:57.527322  473668 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:39:57.527353  473668 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:39:57.527382  473668 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:39:57.527385  473668 kubeadm.go:319] 
	I1205 06:39:57.528755  473668 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:39:57.529177  473668 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:39:57.529284  473668 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:39:57.529517  473668 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:39:57.529522  473668 kubeadm.go:319] 
	I1205 06:39:57.529589  473668 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:39:57.529645  473668 kubeadm.go:403] duration metric: took 8m7.163901947s to StartCluster
	I1205 06:39:57.529683  473668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:39:57.529746  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:39:57.557166  473668 cri.go:89] found id: ""
	I1205 06:39:57.557182  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.557189  473668 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:39:57.557196  473668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:39:57.557261  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:39:57.589077  473668 cri.go:89] found id: ""
	I1205 06:39:57.589091  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.589098  473668 logs.go:284] No container was found matching "etcd"
	I1205 06:39:57.589104  473668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:39:57.589161  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:39:57.615004  473668 cri.go:89] found id: ""
	I1205 06:39:57.615018  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.615025  473668 logs.go:284] No container was found matching "coredns"
	I1205 06:39:57.615030  473668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:39:57.615088  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:39:57.644936  473668 cri.go:89] found id: ""
	I1205 06:39:57.644950  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.644958  473668 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:39:57.644963  473668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:39:57.645022  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:39:57.672615  473668 cri.go:89] found id: ""
	I1205 06:39:57.672629  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.672636  473668 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:39:57.672641  473668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:39:57.672709  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:39:57.699175  473668 cri.go:89] found id: ""
	I1205 06:39:57.699188  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.699196  473668 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:39:57.699201  473668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:39:57.699259  473668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:39:57.727072  473668 cri.go:89] found id: ""
	I1205 06:39:57.727086  473668 logs.go:282] 0 containers: []
	W1205 06:39:57.727094  473668 logs.go:284] No container was found matching "kindnet"
	I1205 06:39:57.727102  473668 logs.go:123] Gathering logs for kubelet ...
	I1205 06:39:57.727112  473668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:39:57.792958  473668 logs.go:123] Gathering logs for dmesg ...
	I1205 06:39:57.792978  473668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:39:57.809098  473668 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:39:57.809114  473668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:39:57.878949  473668 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:39:57.870523    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.871286    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.872924    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.873595    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.875250    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:39:57.870523    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.871286    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.872924    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.873595    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:57.875250    5511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
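Note: "connection refused" on localhost:8441 here is a downstream symptom: with the kubelet unhealthy, the static-pod apiserver was never launched, so 'kubectl describe nodes' has nothing to talk to. A quick cross-check on the node (both commands appear elsewhere in this log):

	sudo crictl ps -a --name=kube-apiserver
	sudo ls /etc/kubernetes/manifests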
	I1205 06:39:57.878963  473668 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:39:57.878974  473668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:39:57.920652  473668 logs.go:123] Gathering logs for container status ...
	I1205 06:39:57.920675  473668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
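Note: the gathering steps above (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) mirror the sources 'minikube logs' collects; assuming the same profile name, one bundled capture would be:

	minikube logs -p functional-787602 --file=logs.txt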
	W1205 06:39:57.953030  473668 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001639162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:39:57.953103  473668 out.go:285] * 
	W1205 06:39:57.953166  473668 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001639162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:39:57.953184  473668 out.go:285] * 
	W1205 06:39:57.955329  473668 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:39:57.960265  473668 out.go:203] 
	W1205 06:39:57.964145  473668 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001639162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:39:57.964188  473668 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:39:57.964209  473668 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:39:57.967365  473668 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 06:31:38 functional-787602 crio[840]: time="2025-12-05T06:31:38.551534077Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=d563d7b4-f9da-4453-bdb7-ce1cbbfc3186 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:38 functional-787602 crio[840]: time="2025-12-05T06:31:38.551578295Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=d563d7b4-f9da-4453-bdb7-ce1cbbfc3186 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:40 functional-787602 crio[840]: time="2025-12-05T06:31:40.649703013Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fe5281b1-99a2-45f8-aeac-32275bdfb05a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:40 functional-787602 crio[840]: time="2025-12-05T06:31:40.650169365Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=fe5281b1-99a2-45f8-aeac-32275bdfb05a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:40 functional-787602 crio[840]: time="2025-12-05T06:31:40.650231897Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=fe5281b1-99a2-45f8-aeac-32275bdfb05a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:41 functional-787602 crio[840]: time="2025-12-05T06:31:41.855335166Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=89806364-b4d1-47f3-b8a6-03a172403c88 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:41 functional-787602 crio[840]: time="2025-12-05T06:31:41.855615719Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=89806364-b4d1-47f3-b8a6-03a172403c88 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:41 functional-787602 crio[840]: time="2025-12-05T06:31:41.855651494Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=89806364-b4d1-47f3-b8a6-03a172403c88 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:41 functional-787602 crio[840]: time="2025-12-05T06:31:41.89159769Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d0085262-8130-4f96-9e4a-2235f21ac349 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:41 functional-787602 crio[840]: time="2025-12-05T06:31:41.891933899Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=d0085262-8130-4f96-9e4a-2235f21ac349 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:41 functional-787602 crio[840]: time="2025-12-05T06:31:41.89201157Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=d0085262-8130-4f96-9e4a-2235f21ac349 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.676791108Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=68025f9e-37a0-43f6-b382-f51f02708b14 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.680138341Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0f166518-f948-4df0-8b06-222091c65745 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.681537792Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e5ab0747-c64a-4c27-bb68-dc37b5d7d2d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.683032891Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=db4b1015-02f2-4b33-ae53-b1ec55f3939e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.683836921Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9a2d60bd-9949-4c8c-9a78-04e189a0de8b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.685268832Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=00766a4f-d665-47ab-952f-21be1c785055 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:31:50 functional-787602 crio[840]: time="2025-12-05T06:31:50.686181196Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f0479b0c-7827-427c-bc2d-1a7f1484a0ba name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.04676682Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=13171a49-7ff0-4763-a80d-10faef0ec9c2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.048522542Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7c08d241-70cc-4942-9f42-0763934178ab name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.050231617Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5cb325b0-2515-4c2b-a315-355d4ddaf17d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.051947142Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0d3c8a54-2ba3-4ce1-a28a-6a48b057e798 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.052919446Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bdcd86c8-74ae-4a4c-bc47-4fa63d42d937 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.05451492Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=8f1261cb-7ad3-4441-a1ee-d5134e7a1e7a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:35:56 functional-787602 crio[840]: time="2025-12-05T06:35:56.055450841Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5fcd3838-72bc-4a40-869a-d8acc64d5282 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:39:58.967405    5630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:58.968158    5630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:58.969740    5630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:58.970274    5630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:39:58.971866    5630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:39:59 up  3:22,  0 user,  load average: 0.21, 0.24, 0.62
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:39:56 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:39:56 functional-787602 kubelet[5437]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:39:56 functional-787602 kubelet[5437]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:39:56 functional-787602 kubelet[5437]: E1205 06:39:56.820597    5437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:39:56 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:39:56 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:39:57 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 05 06:39:57 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:39:57 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:39:57 functional-787602 kubelet[5443]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:39:57 functional-787602 kubelet[5443]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:39:57 functional-787602 kubelet[5443]: E1205 06:39:57.588425    5443 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:39:57 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:39:57 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:39:58 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 05 06:39:58 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:39:58 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:39:58 functional-787602 kubelet[5542]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:39:58 functional-787602 kubelet[5542]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:39:58 functional-787602 kubelet[5542]: E1205 06:39:58.311804    5542 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:39:58 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:39:58 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:39:59 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 05 06:39:59 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:39:59 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
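Taken together, the dump above pins down the root cause: every copy of the kubeadm output warns that kubelet v1.35 or newer refuses to run on a cgroup v1 host unless the KubeletConfiguration option 'FailCgroupV1' is explicitly set to 'false', and the kubelet journal shows exactly that validation failing in a restart loop (counters 640 through 642) on this 5.15.0-1084-aws kernel. A minimal diagnosis-and-workaround sketch follows; the profile name is taken from this run, while the camelCase field spelling failCgroupV1 and the idea of applying it through the kubeadm patches mechanism (visible in the [patches] lines above) are assumptions, not something this log demonstrates:

	# Confirm the node's cgroup version: "cgroup2fs" means v2, "tmpfs" means v1.
	out/minikube-linux-arm64 ssh -p functional-787602 -- stat -fc %T /sys/fs/cgroup
	# Re-check the crash loop captured in the kubelet excerpt above.
	out/minikube-linux-arm64 ssh -p functional-787602 -- sudo journalctl -u kubelet --no-pager -n 20
	# Hypothetical KubeletConfiguration fragment opting back into cgroup v1, using
	# the option named by the SystemVerification warning (field spelling assumed).
	cat <<'EOF' > kubeletconfiguration-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Alternatively, moving the host itself to cgroup v2 (for example, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line) avoids the validation entirely, which is the migration path the warning recommends.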
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 6 (340.88284ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 06:39:59.465056  480041 status.go:458] kubeconfig endpoint: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
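The status output above carries its own hint: the profile is missing from the test kubeconfig ("functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig), which is what the "stale minikube-vm" warning refers to. A hedged repair sketch using the command the warning itself names, with the profile name taken from this run:

	# Rewrite the kubeconfig entry for this profile, then inspect the contexts.
	out/minikube-linux-arm64 update-context -p functional-787602
	kubectl config get-contexts

Note this would only restore the kubeconfig entry; it would not address the kubelet's cgroup v1 refusal diagnosed above.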
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (509.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1205 06:39:59.481134  444147 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787602 --alsologtostderr -v=8
E1205 06:40:39.247020  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:41:06.953022  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:43:45.391821  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:45:39.247034  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787602 --alsologtostderr -v=8: exit status 80 (6m6.819046913s)

                                                
                                                
-- stdout --
	* [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 06:39:59.523609  480112 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:39:59.523793  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.523816  480112 out.go:374] Setting ErrFile to fd 2...
	I1205 06:39:59.523837  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.524220  480112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:39:59.524681  480112 out.go:368] Setting JSON to false
	I1205 06:39:59.525943  480112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12127,"bootTime":1764904673,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:39:59.526021  480112 start.go:143] virtualization:  
	I1205 06:39:59.529485  480112 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:39:59.533299  480112 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:39:59.533430  480112 notify.go:221] Checking for updates...
	I1205 06:39:59.539032  480112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:39:59.542038  480112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:39:59.544821  480112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:39:59.547558  480112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:39:59.550303  480112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:39:59.553653  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:39:59.553793  480112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:39:59.587101  480112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:39:59.587209  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.647016  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.637315829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.647121  480112 docker.go:319] overlay module found
	I1205 06:39:59.650323  480112 out.go:179] * Using the docker driver based on existing profile
	I1205 06:39:59.653400  480112 start.go:309] selected driver: docker
	I1205 06:39:59.653426  480112 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.653516  480112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:39:59.653622  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.713012  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.702941112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.713548  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:39:59.713621  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:39:59.713678  480112 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.716888  480112 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:39:59.719675  480112 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:39:59.722682  480112 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:39:59.725781  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:39:59.725946  480112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:39:59.745247  480112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:39:59.745269  480112 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:39:59.798316  480112 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:40:00.046313  480112 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 06:40:00.046504  480112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:40:00.046814  480112 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:40:00.046857  480112 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.046933  480112 start.go:364] duration metric: took 43.709µs to acquireMachinesLock for "functional-787602"
	I1205 06:40:00.046950  480112 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:40:00.046969  480112 fix.go:54] fixHost starting: 
	I1205 06:40:00.047287  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:00.049366  480112 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049539  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:40:00.049567  480112 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 224.085µs
	I1205 06:40:00.049597  480112 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:40:00.049636  480112 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049702  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:40:00.049722  480112 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 89.733µs
	I1205 06:40:00.050277  480112 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:40:00.050353  480112 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051458  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:40:00.051500  480112 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.155091ms
	I1205 06:40:00.051529  480112 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051582  480112 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051659  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:40:00.051680  480112 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 114.34µs
	I1205 06:40:00.051702  480112 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051741  480112 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051791  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:40:00.051822  480112 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 83.054µs
	I1205 06:40:00.063751  480112 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064371  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:40:00.064388  480112 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 658.756µs
	I1205 06:40:00.064400  480112 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:40:00.064453  480112 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064504  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:40:00.064510  480112 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 92.349µs
	I1205 06:40:00.064516  480112 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:40:00.064532  480112 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.067976  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:40:00.068029  480112 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.495239ms
	I1205 06:40:00.068074  480112 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:40:00.058631  480112 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:40:00.068155  480112 cache.go:87] Successfully saved all images to host disk.
	I1205 06:40:00.156134  480112 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:40:00.156177  480112 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:40:00.160840  480112 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:40:00.160889  480112 machine.go:94] provisionDockerMachine start ...
	I1205 06:40:00.161003  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.232523  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.232876  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.232886  480112 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:40:00.484459  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.484485  480112 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:40:00.484571  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.540991  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.541328  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.541341  480112 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:40:00.761314  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.761404  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.782315  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.782666  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.782689  480112 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:40:00.934901  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:40:00.934930  480112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:40:00.935005  480112 ubuntu.go:190] setting up certificates
	I1205 06:40:00.935016  480112 provision.go:84] configureAuth start
	I1205 06:40:00.935097  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:00.952439  480112 provision.go:143] copyHostCerts
	I1205 06:40:00.952486  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952527  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:40:00.952543  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952619  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:40:00.952705  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952727  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:40:00.952737  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952765  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:40:00.952809  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952828  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:40:00.952837  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952861  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:40:00.952911  480112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:40:01.160028  480112 provision.go:177] copyRemoteCerts
	I1205 06:40:01.160150  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:40:01.160201  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.184354  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.295740  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:40:01.295812  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:40:01.316925  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:40:01.316986  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:40:01.339507  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:40:01.339574  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:40:01.358710  480112 provision.go:87] duration metric: took 423.67042ms to configureAuth
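
For reference, the "generating server cert" step above is a standard self-signed-CA flow: load the CA pair from ca.pem/ca-key.pem, then issue a server certificate carrying the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-787602, localhost, minikube). A minimal Go sketch of that flow follows; it is illustrative only (the CA here is freshly generated rather than loaded, and error handling is elided), not minikube's actual provision.go code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // In minikube the CA pair is loaded from certs/ca.pem and certs/ca-key.pem;
        // generating one inline keeps this sketch self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"jenkins.functional-787602"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"functional-787602", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // --> server.pem
    }
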
	I1205 06:40:01.358788  480112 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:40:01.358981  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:01.359104  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.377010  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:01.377340  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:01.377360  480112 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:40:01.723262  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:40:01.723303  480112 machine.go:97] duration metric: took 1.56238873s to provisionDockerMachine
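
The tee above leaves a one-line environment file at /etc/sysconfig/crio.minikube. How that file reaches the crio binary is not visible in this log; a typical systemd hookup (an assumption for illustration, not the actual kicbase crio.service) would look like:

    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS

The leading "-" makes the env file optional, so crio still starts on nodes where minikube never wrote it.
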
	I1205 06:40:01.723316  480112 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:40:01.723329  480112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:40:01.723398  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:40:01.723446  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.742177  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.847102  480112 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:40:01.850854  480112 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:40:01.850880  480112 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:40:01.850885  480112 command_runner.go:130] > VERSION_ID="12"
	I1205 06:40:01.850889  480112 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:40:01.850897  480112 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:40:01.850901  480112 command_runner.go:130] > ID=debian
	I1205 06:40:01.850906  480112 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:40:01.850910  480112 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:40:01.850918  480112 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:40:01.850955  480112 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:40:01.850978  480112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:40:01.850990  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:40:01.851049  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:40:01.851138  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:40:01.851149  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /etc/ssl/certs/4441472.pem
	I1205 06:40:01.851230  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:40:01.851237  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> /etc/test/nested/copy/444147/hosts
	I1205 06:40:01.851282  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:40:01.859516  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:01.879483  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:40:01.898655  480112 start.go:296] duration metric: took 175.324245ms for postStartSetup
	I1205 06:40:01.898744  480112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:40:01.898799  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.917838  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.020238  480112 command_runner.go:130] > 18%
	I1205 06:40:02.020354  480112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:40:02.025815  480112 command_runner.go:130] > 160G
	I1205 06:40:02.026493  480112 fix.go:56] duration metric: took 1.979519007s for fixHost
	I1205 06:40:02.026516  480112 start.go:83] releasing machines lock for "functional-787602", held for 1.979574696s
	I1205 06:40:02.026587  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:02.046979  480112 ssh_runner.go:195] Run: cat /version.json
	I1205 06:40:02.047030  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.047280  480112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:40:02.047345  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.081102  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.085747  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.189932  480112 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:40:02.190072  480112 ssh_runner.go:195] Run: systemctl --version
	I1205 06:40:02.280062  480112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:40:02.282950  480112 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:40:02.282989  480112 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:40:02.283061  480112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:40:02.319896  480112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:40:02.324212  480112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:40:02.324374  480112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:40:02.324444  480112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:40:02.332670  480112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:40:02.332736  480112 start.go:496] detecting cgroup driver to use...
	I1205 06:40:02.332774  480112 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:40:02.332831  480112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:40:02.348502  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:40:02.361851  480112 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:40:02.361926  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:40:02.380602  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:40:02.393710  480112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:40:02.522109  480112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:40:02.655884  480112 docker.go:234] disabling docker service ...
	I1205 06:40:02.655958  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:40:02.673330  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:40:02.687649  480112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:40:02.802223  480112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:40:02.930343  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:40:02.944017  480112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:40:02.956898  480112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 06:40:02.958122  480112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:40:02.958248  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.967567  480112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:40:02.967712  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.976781  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.985897  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.994984  480112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:40:03.003975  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.013874  480112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.022919  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.032163  480112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:40:03.038816  480112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:40:03.039990  480112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:40:03.049427  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.175291  480112 ssh_runner.go:195] Run: sudo systemctl restart crio
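
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart. The file itself is not captured in this log, but from the commands its effective settings come out roughly as below (section placement under [crio.image]/[crio.runtime] is assumed from CRI-O's usual layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The cgroup_manager, conmon_cgroup, and default_sysctls values are confirmed by the `crio config` dump later in this log.
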
	I1205 06:40:03.341374  480112 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:40:03.341477  480112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:40:03.345425  480112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 06:40:03.345448  480112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:40:03.345464  480112 command_runner.go:130] > Device: 0,73	Inode: 1755        Links: 1
	I1205 06:40:03.345472  480112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:03.345477  480112 command_runner.go:130] > Access: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345484  480112 command_runner.go:130] > Modify: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345489  480112 command_runner.go:130] > Change: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345493  480112 command_runner.go:130] >  Birth: -
	I1205 06:40:03.345525  480112 start.go:564] Will wait 60s for crictl version
	I1205 06:40:03.345579  480112 ssh_runner.go:195] Run: which crictl
	I1205 06:40:03.348931  480112 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:40:03.349401  480112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:40:03.373825  480112 command_runner.go:130] > Version:  0.1.0
	I1205 06:40:03.373849  480112 command_runner.go:130] > RuntimeName:  cri-o
	I1205 06:40:03.373973  480112 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1205 06:40:03.374159  480112 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:40:03.376168  480112 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:40:03.376252  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.403613  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.403690  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.403710  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.403727  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.403756  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.403777  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.403795  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.403813  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.403844  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.403865  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.403879  480112 command_runner.go:130] >      static
	I1205 06:40:03.403895  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.403924  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.403945  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.403964  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.403979  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.404006  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.404027  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.404044  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.404059  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.406234  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.432776  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.432811  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.432836  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.432843  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.432849  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.432862  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.432872  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.432877  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.432886  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.432908  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.432916  480112 command_runner.go:130] >      static
	I1205 06:40:03.432920  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.432948  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.432956  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.432959  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.432963  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.432970  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.432998  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.433006  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.433010  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.440242  480112 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:40:03.443151  480112 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:40:03.459691  480112 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:40:03.463610  480112 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:40:03.463748  480112 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:40:03.463853  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:40:03.463910  480112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:40:03.497207  480112 command_runner.go:130] > {
	I1205 06:40:03.497226  480112 command_runner.go:130] >   "images":  [
	I1205 06:40:03.497231  480112 command_runner.go:130] >     {
	I1205 06:40:03.497239  480112 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:40:03.497244  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497250  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:40:03.497253  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497257  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497267  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1205 06:40:03.497271  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497276  480112 command_runner.go:130] >       "size":  "29035622",
	I1205 06:40:03.497279  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497283  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497286  480112 command_runner.go:130] >     },
	I1205 06:40:03.497290  480112 command_runner.go:130] >     {
	I1205 06:40:03.497297  480112 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:40:03.497301  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497306  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:40:03.497309  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497313  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497321  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1205 06:40:03.497324  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497328  480112 command_runner.go:130] >       "size":  "74488375",
	I1205 06:40:03.497332  480112 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:40:03.497336  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497340  480112 command_runner.go:130] >     },
	I1205 06:40:03.497343  480112 command_runner.go:130] >     {
	I1205 06:40:03.497350  480112 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:40:03.497354  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497359  480112 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:40:03.497362  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497366  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497388  480112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1205 06:40:03.497393  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497397  480112 command_runner.go:130] >       "size":  "60854229",
	I1205 06:40:03.497401  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497405  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497409  480112 command_runner.go:130] >       },
	I1205 06:40:03.497413  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497417  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497421  480112 command_runner.go:130] >     },
	I1205 06:40:03.497424  480112 command_runner.go:130] >     {
	I1205 06:40:03.497430  480112 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:40:03.497434  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497439  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:40:03.497442  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497446  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497454  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1205 06:40:03.497459  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497463  480112 command_runner.go:130] >       "size":  "84947242",
	I1205 06:40:03.497466  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497469  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497473  480112 command_runner.go:130] >       },
	I1205 06:40:03.497476  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497480  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497483  480112 command_runner.go:130] >     },
	I1205 06:40:03.497486  480112 command_runner.go:130] >     {
	I1205 06:40:03.497492  480112 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:40:03.497496  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497501  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:40:03.497505  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497509  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497517  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1205 06:40:03.497520  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497529  480112 command_runner.go:130] >       "size":  "72167568",
	I1205 06:40:03.497539  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497542  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497545  480112 command_runner.go:130] >       },
	I1205 06:40:03.497549  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497552  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497555  480112 command_runner.go:130] >     },
	I1205 06:40:03.497558  480112 command_runner.go:130] >     {
	I1205 06:40:03.497564  480112 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:40:03.497568  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497573  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:40:03.497575  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497579  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497588  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1205 06:40:03.497592  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497595  480112 command_runner.go:130] >       "size":  "74105124",
	I1205 06:40:03.497599  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497603  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497606  480112 command_runner.go:130] >     },
	I1205 06:40:03.497609  480112 command_runner.go:130] >     {
	I1205 06:40:03.497615  480112 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:40:03.497618  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497624  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:40:03.497627  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497630  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497638  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1205 06:40:03.497641  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497645  480112 command_runner.go:130] >       "size":  "49819792",
	I1205 06:40:03.497648  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497652  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497655  480112 command_runner.go:130] >       },
	I1205 06:40:03.497659  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497663  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497666  480112 command_runner.go:130] >     },
	I1205 06:40:03.497672  480112 command_runner.go:130] >     {
	I1205 06:40:03.497679  480112 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:40:03.497683  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497687  480112 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.497690  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497694  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497701  480112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1205 06:40:03.497705  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497708  480112 command_runner.go:130] >       "size":  "517328",
	I1205 06:40:03.497712  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497715  480112 command_runner.go:130] >         "value":  "65535"
	I1205 06:40:03.497718  480112 command_runner.go:130] >       },
	I1205 06:40:03.497722  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497726  480112 command_runner.go:130] >       "pinned":  true
	I1205 06:40:03.497729  480112 command_runner.go:130] >     }
	I1205 06:40:03.497732  480112 command_runner.go:130] >   ]
	I1205 06:40:03.497735  480112 command_runner.go:130] > }
	I1205 06:40:03.499390  480112 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:40:03.499408  480112 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:40:03.499417  480112 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:40:03.499515  480112 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
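
The kubelet unit printed above is a rendered systemd drop-in: minikube fills a template with the runtime, Kubernetes version, node name, and node IP that appear in the ExecStart flags. A stripped-down Go sketch of that rendering (the template text is abbreviated to the fields shown here and is not minikube's actual template; output indentation follows the raw string literal):

    package main

    import (
        "os"
        "text/template"
    )

    var unit = template.Must(template.New("kubelet").Parse(`[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `))

    func main() {
        // Values taken from the rendered unit above.
        _ = unit.Execute(os.Stdout, map[string]string{
            "Runtime":           "crio",
            "KubernetesVersion": "v1.35.0-beta.0",
            "NodeName":          "functional-787602",
            "NodeIP":            "192.168.49.2",
        })
    }
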
	I1205 06:40:03.499587  480112 ssh_runner.go:195] Run: crio config
	I1205 06:40:03.548638  480112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 06:40:03.548661  480112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 06:40:03.548669  480112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 06:40:03.548671  480112 command_runner.go:130] > #
	I1205 06:40:03.548686  480112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 06:40:03.548693  480112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 06:40:03.548700  480112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 06:40:03.548716  480112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 06:40:03.548720  480112 command_runner.go:130] > # reload'.
	I1205 06:40:03.548726  480112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 06:40:03.548733  480112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 06:40:03.548739  480112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 06:40:03.548745  480112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 06:40:03.548748  480112 command_runner.go:130] > [crio]
	I1205 06:40:03.548755  480112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 06:40:03.548760  480112 command_runner.go:130] > # containers images, in this directory.
	I1205 06:40:03.549179  480112 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 06:40:03.549226  480112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 06:40:03.549246  480112 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1205 06:40:03.549268  480112 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 06:40:03.549287  480112 command_runner.go:130] > # imagestore = ""
	I1205 06:40:03.549306  480112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 06:40:03.549324  480112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 06:40:03.549341  480112 command_runner.go:130] > # storage_driver = "overlay"
	I1205 06:40:03.549356  480112 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 06:40:03.549385  480112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 06:40:03.549402  480112 command_runner.go:130] > # storage_option = [
	I1205 06:40:03.549417  480112 command_runner.go:130] > # ]
	I1205 06:40:03.549435  480112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 06:40:03.549461  480112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 06:40:03.549487  480112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 06:40:03.549504  480112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 06:40:03.549521  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 06:40:03.549545  480112 command_runner.go:130] > # always happen on a node reboot
	I1205 06:40:03.549737  480112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 06:40:03.549768  480112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 06:40:03.549775  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 06:40:03.549781  480112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 06:40:03.549785  480112 command_runner.go:130] > # version_file_persist = ""
	I1205 06:40:03.549793  480112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 06:40:03.549801  480112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 06:40:03.549805  480112 command_runner.go:130] > # internal_wipe = true
	I1205 06:40:03.549813  480112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 06:40:03.549818  480112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 06:40:03.549822  480112 command_runner.go:130] > # internal_repair = true
	I1205 06:40:03.549828  480112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 06:40:03.549834  480112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 06:40:03.549840  480112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 06:40:03.549845  480112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 06:40:03.549854  480112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 06:40:03.549858  480112 command_runner.go:130] > [crio.api]
	I1205 06:40:03.549863  480112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 06:40:03.549867  480112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 06:40:03.549872  480112 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 06:40:03.549876  480112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 06:40:03.549883  480112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 06:40:03.549889  480112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 06:40:03.549892  480112 command_runner.go:130] > # stream_port = "0"
	I1205 06:40:03.549897  480112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 06:40:03.549901  480112 command_runner.go:130] > # stream_enable_tls = false
	I1205 06:40:03.549907  480112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 06:40:03.549911  480112 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 06:40:03.549917  480112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 06:40:03.549923  480112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549927  480112 command_runner.go:130] > # stream_tls_cert = ""
	I1205 06:40:03.549933  480112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 06:40:03.549939  480112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549942  480112 command_runner.go:130] > # stream_tls_key = ""
	I1205 06:40:03.549948  480112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 06:40:03.549954  480112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 06:40:03.549958  480112 command_runner.go:130] > # automatically pick up the changes.
	I1205 06:40:03.549962  480112 command_runner.go:130] > # stream_tls_ca = ""
	I1205 06:40:03.549979  480112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549984  480112 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 06:40:03.549991  480112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549996  480112 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 06:40:03.550002  480112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 06:40:03.550007  480112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 06:40:03.550010  480112 command_runner.go:130] > [crio.runtime]
	I1205 06:40:03.550016  480112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 06:40:03.550021  480112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 06:40:03.550025  480112 command_runner.go:130] > # "nofile=1024:2048"
	I1205 06:40:03.550034  480112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 06:40:03.550038  480112 command_runner.go:130] > # default_ulimits = [
	I1205 06:40:03.550041  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550047  480112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 06:40:03.550050  480112 command_runner.go:130] > # no_pivot = false
	I1205 06:40:03.550056  480112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 06:40:03.550062  480112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 06:40:03.550067  480112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 06:40:03.550072  480112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 06:40:03.550077  480112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 06:40:03.550084  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550087  480112 command_runner.go:130] > # conmon = ""
	I1205 06:40:03.550092  480112 command_runner.go:130] > # Cgroup setting for conmon
	I1205 06:40:03.550099  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 06:40:03.550102  480112 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 06:40:03.550108  480112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 06:40:03.550115  480112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 06:40:03.550124  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550128  480112 command_runner.go:130] > # conmon_env = [
	I1205 06:40:03.550130  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550136  480112 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 06:40:03.550141  480112 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 06:40:03.550146  480112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 06:40:03.550150  480112 command_runner.go:130] > # default_env = [
	I1205 06:40:03.550152  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550158  480112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 06:40:03.550165  480112 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1205 06:40:03.550169  480112 command_runner.go:130] > # selinux = false
	I1205 06:40:03.550180  480112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 06:40:03.550188  480112 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1205 06:40:03.550193  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550197  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.550202  480112 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1205 06:40:03.550212  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550216  480112 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1205 06:40:03.550223  480112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 06:40:03.550229  480112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 06:40:03.550235  480112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 06:40:03.550241  480112 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 06:40:03.550246  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550250  480112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 06:40:03.550255  480112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 06:40:03.550259  480112 command_runner.go:130] > # the cgroup blockio controller.
	I1205 06:40:03.550263  480112 command_runner.go:130] > # blockio_config_file = ""
	I1205 06:40:03.550269  480112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 06:40:03.550273  480112 command_runner.go:130] > # blockio parameters.
	I1205 06:40:03.550277  480112 command_runner.go:130] > # blockio_reload = false
	I1205 06:40:03.550284  480112 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 06:40:03.550287  480112 command_runner.go:130] > # irqbalance daemon.
	I1205 06:40:03.550292  480112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 06:40:03.550298  480112 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 06:40:03.550305  480112 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 06:40:03.550313  480112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 06:40:03.550319  480112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 06:40:03.550325  480112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 06:40:03.550330  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550333  480112 command_runner.go:130] > # rdt_config_file = ""
	I1205 06:40:03.550338  480112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 06:40:03.550342  480112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 06:40:03.550348  480112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 06:40:03.550711  480112 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 06:40:03.550724  480112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 06:40:03.550731  480112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 06:40:03.550734  480112 command_runner.go:130] > # will be added.
	I1205 06:40:03.550738  480112 command_runner.go:130] > # default_capabilities = [
	I1205 06:40:03.550742  480112 command_runner.go:130] > # 	"CHOWN",
	I1205 06:40:03.550746  480112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 06:40:03.550749  480112 command_runner.go:130] > # 	"FSETID",
	I1205 06:40:03.550752  480112 command_runner.go:130] > # 	"FOWNER",
	I1205 06:40:03.550756  480112 command_runner.go:130] > # 	"SETGID",
	I1205 06:40:03.550759  480112 command_runner.go:130] > # 	"SETUID",
	I1205 06:40:03.550782  480112 command_runner.go:130] > # 	"SETPCAP",
	I1205 06:40:03.550786  480112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 06:40:03.550789  480112 command_runner.go:130] > # 	"KILL",
	I1205 06:40:03.550792  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550800  480112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 06:40:03.550810  480112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 06:40:03.550815  480112 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 06:40:03.550821  480112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 06:40:03.550827  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550831  480112 command_runner.go:130] > default_sysctls = [
	I1205 06:40:03.550835  480112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 06:40:03.550838  480112 command_runner.go:130] > ]
	I1205 06:40:03.550842  480112 command_runner.go:130] > # List of devices on the host that a
	I1205 06:40:03.550849  480112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 06:40:03.550852  480112 command_runner.go:130] > # allowed_devices = [
	I1205 06:40:03.550856  480112 command_runner.go:130] > # 	"/dev/fuse",
	I1205 06:40:03.550859  480112 command_runner.go:130] > # 	"/dev/net/tun",
	I1205 06:40:03.550863  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550867  480112 command_runner.go:130] > # List of additional devices. specified as
	I1205 06:40:03.550875  480112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 06:40:03.550880  480112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 06:40:03.550886  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550889  480112 command_runner.go:130] > # additional_devices = [
	I1205 06:40:03.550894  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550899  480112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 06:40:03.550905  480112 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 06:40:03.550909  480112 command_runner.go:130] > # 	"/etc/cdi",
	I1205 06:40:03.550912  480112 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 06:40:03.550915  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550921  480112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 06:40:03.550927  480112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 06:40:03.550931  480112 command_runner.go:130] > # Defaults to false.
	I1205 06:40:03.550936  480112 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 06:40:03.550942  480112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 06:40:03.550949  480112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 06:40:03.550952  480112 command_runner.go:130] > # hooks_dir = [
	I1205 06:40:03.550956  480112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 06:40:03.550962  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550972  480112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 06:40:03.550979  480112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 06:40:03.550984  480112 command_runner.go:130] > # its default mounts from the following two files:
	I1205 06:40:03.550987  480112 command_runner.go:130] > #
	I1205 06:40:03.550993  480112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 06:40:03.550999  480112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 06:40:03.551004  480112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 06:40:03.551007  480112 command_runner.go:130] > #
	I1205 06:40:03.551013  480112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 06:40:03.551019  480112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 06:40:03.551025  480112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 06:40:03.551030  480112 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 06:40:03.551032  480112 command_runner.go:130] > #
	I1205 06:40:03.551036  480112 command_runner.go:130] > # default_mounts_file = ""
	I1205 06:40:03.551041  480112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 06:40:03.551047  480112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 06:40:03.551051  480112 command_runner.go:130] > # pids_limit = -1
	I1205 06:40:03.551057  480112 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1205 06:40:03.551063  480112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 06:40:03.551069  480112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 06:40:03.551077  480112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 06:40:03.551080  480112 command_runner.go:130] > # log_size_max = -1
	I1205 06:40:03.551087  480112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 06:40:03.551091  480112 command_runner.go:130] > # log_to_journald = false
	I1205 06:40:03.551098  480112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 06:40:03.551103  480112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 06:40:03.551108  480112 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 06:40:03.551113  480112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 06:40:03.551118  480112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 06:40:03.551121  480112 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 06:40:03.551127  480112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 06:40:03.551131  480112 command_runner.go:130] > # read_only = false
	I1205 06:40:03.551137  480112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 06:40:03.551147  480112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 06:40:03.551151  480112 command_runner.go:130] > # live configuration reload.
	I1205 06:40:03.551154  480112 command_runner.go:130] > # log_level = "info"
	I1205 06:40:03.551160  480112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 06:40:03.551164  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.551168  480112 command_runner.go:130] > # log_filter = ""
	I1205 06:40:03.551174  480112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551180  480112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 06:40:03.551184  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551192  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551196  480112 command_runner.go:130] > # uid_mappings = ""
	I1205 06:40:03.551201  480112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551208  480112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 06:40:03.551212  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551219  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551223  480112 command_runner.go:130] > # gid_mappings = ""
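	To illustrate the containerUID:HostUID:Size form (the values are hypothetical, and the options themselves are deprecated as noted above):
	  uid_mappings = "0:100000:65536"                    # container UID 0 maps to host UID 100000, for 65536 IDs
	  gid_mappings = "0:100000:32768,40000:200000:1000"  # multiple ranges separated by a comma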
	I1205 06:40:03.551229  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 06:40:03.551235  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551241  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551249  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551253  480112 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 06:40:03.551259  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 06:40:03.551264  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551271  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551278  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551282  480112 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 06:40:03.551288  480112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 06:40:03.551296  480112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 06:40:03.551302  480112 command_runner.go:130] > # value is 30s, as lower values are not considered by CRI-O.
	I1205 06:40:03.551306  480112 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 06:40:03.551311  480112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 06:40:03.551317  480112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 06:40:03.551322  480112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 06:40:03.551330  480112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 06:40:03.551333  480112 command_runner.go:130] > # drop_infra_ctr = true
	I1205 06:40:03.551340  480112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 06:40:03.551346  480112 command_runner.go:130] > # You can use the Linux CPU list format to specify the desired CPUs.
	I1205 06:40:03.551353  480112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 06:40:03.551357  480112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 06:40:03.551364  480112 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 06:40:03.551370  480112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 06:40:03.551375  480112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 06:40:03.551380  480112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 06:40:03.551384  480112 command_runner.go:130] > # shared_cpuset = ""
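	For illustration, the Linux CPU list format mentioned above mixes ranges and single CPUs; a minimal sketch (the CPU numbers are hypothetical, not from this node):
	  infra_ctr_cpuset = "0-1"     # a contiguous range of CPUs
	  shared_cpuset = "2-5,8"      # ranges and single CPUs, comma-separated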
	I1205 06:40:03.551390  480112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 06:40:03.551395  480112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 06:40:03.551398  480112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 06:40:03.551405  480112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 06:40:03.551408  480112 command_runner.go:130] > # pinns_path = ""
	I1205 06:40:03.551414  480112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 06:40:03.551420  480112 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 06:40:03.551424  480112 command_runner.go:130] > # enable_criu_support = true
	I1205 06:40:03.551428  480112 command_runner.go:130] > # Enable/disable the generation of container and
	I1205 06:40:03.551434  480112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 06:40:03.551438  480112 command_runner.go:130] > # enable_pod_events = false
	I1205 06:40:03.551444  480112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 06:40:03.551449  480112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 06:40:03.551453  480112 command_runner.go:130] > # default_runtime = "crun"
	I1205 06:40:03.551458  480112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 06:40:03.551466  480112 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1205 06:40:03.551475  480112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 06:40:03.551480  480112 command_runner.go:130] > # creation as a file is not desired either.
	I1205 06:40:03.551488  480112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 06:40:03.551495  480112 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 06:40:03.551499  480112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 06:40:03.551502  480112 command_runner.go:130] > # ]
	I1205 06:40:03.551511  480112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 06:40:03.551518  480112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 06:40:03.551524  480112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 06:40:03.551528  480112 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 06:40:03.551532  480112 command_runner.go:130] > #
	I1205 06:40:03.551536  480112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 06:40:03.551541  480112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 06:40:03.551544  480112 command_runner.go:130] > # runtime_type = "oci"
	I1205 06:40:03.551549  480112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 06:40:03.551553  480112 command_runner.go:130] > # inherit_default_runtime = false
	I1205 06:40:03.551558  480112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 06:40:03.551562  480112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 06:40:03.551566  480112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 06:40:03.551570  480112 command_runner.go:130] > # monitor_env = []
	I1205 06:40:03.551574  480112 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 06:40:03.551578  480112 command_runner.go:130] > # allowed_annotations = []
	I1205 06:40:03.551583  480112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 06:40:03.551587  480112 command_runner.go:130] > # no_sync_log = false
	I1205 06:40:03.551590  480112 command_runner.go:130] > # default_annotations = {}
	I1205 06:40:03.551594  480112 command_runner.go:130] > # stream_websockets = false
	I1205 06:40:03.551598  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.551631  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.551636  480112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 06:40:03.551643  480112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 06:40:03.551649  480112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 06:40:03.551656  480112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 06:40:03.551659  480112 command_runner.go:130] > #   in $PATH.
	I1205 06:40:03.551665  480112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 06:40:03.551669  480112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 06:40:03.551675  480112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 06:40:03.551678  480112 command_runner.go:130] > #   state.
	I1205 06:40:03.551685  480112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 06:40:03.551690  480112 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 06:40:03.551699  480112 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1205 06:40:03.551706  480112 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1205 06:40:03.551711  480112 command_runner.go:130] > #   the values from the default runtime on load time.
	I1205 06:40:03.551717  480112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 06:40:03.551723  480112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 06:40:03.551730  480112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 06:40:03.551736  480112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 06:40:03.551740  480112 command_runner.go:130] > #   The currently recognized values are:
	I1205 06:40:03.551747  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 06:40:03.551754  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 06:40:03.551761  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 06:40:03.551767  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 06:40:03.551774  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 06:40:03.551781  480112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 06:40:03.551788  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 06:40:03.551794  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 06:40:03.551800  480112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 06:40:03.551807  480112 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1205 06:40:03.551813  480112 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1205 06:40:03.551819  480112 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1205 06:40:03.551828  480112 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1205 06:40:03.551834  480112 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1205 06:40:03.551840  480112 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1205 06:40:03.551848  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1205 06:40:03.551854  480112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 06:40:03.551858  480112 command_runner.go:130] > #   deprecated option "conmon".
	I1205 06:40:03.551865  480112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 06:40:03.551870  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 06:40:03.551877  480112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 06:40:03.551882  480112 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 06:40:03.551888  480112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 06:40:03.551893  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 06:40:03.551900  480112 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1205 06:40:03.551907  480112 command_runner.go:130] > #   conmon-rs by using:
	I1205 06:40:03.551915  480112 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1205 06:40:03.551924  480112 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1205 06:40:03.551931  480112 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1205 06:40:03.551937  480112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 06:40:03.551943  480112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 06:40:03.551950  480112 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1205 06:40:03.551958  480112 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1205 06:40:03.551964  480112 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1205 06:40:03.551971  480112 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1205 06:40:03.551979  480112 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1205 06:40:03.551983  480112 command_runner.go:130] > #   when a machine crash happens.
	I1205 06:40:03.551990  480112 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1205 06:40:03.551997  480112 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1205 06:40:03.552005  480112 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1205 06:40:03.552009  480112 command_runner.go:130] > #   seccomp profile for the runtime.
	I1205 06:40:03.552015  480112 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1205 06:40:03.552022  480112 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1205 06:40:03.552025  480112 command_runner.go:130] > #
	I1205 06:40:03.552029  480112 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 06:40:03.552032  480112 command_runner.go:130] > #
	I1205 06:40:03.552038  480112 command_runner.go:130] > # This feature can help you debug seccomp-related issues, for example if
	I1205 06:40:03.552044  480112 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1205 06:40:03.552046  480112 command_runner.go:130] > #
	I1205 06:40:03.552053  480112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 06:40:03.552058  480112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 06:40:03.552061  480112 command_runner.go:130] > #
	I1205 06:40:03.552067  480112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 06:40:03.552070  480112 command_runner.go:130] > # feature.
	I1205 06:40:03.552072  480112 command_runner.go:130] > #
	I1205 06:40:03.552078  480112 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1205 06:40:03.552085  480112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 06:40:03.552090  480112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 06:40:03.552104  480112 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 06:40:03.552111  480112 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 06:40:03.552114  480112 command_runner.go:130] > #
	I1205 06:40:03.552121  480112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 06:40:03.552127  480112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 06:40:03.552129  480112 command_runner.go:130] > #
	I1205 06:40:03.552135  480112 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1205 06:40:03.552141  480112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 06:40:03.552144  480112 command_runner.go:130] > #
	I1205 06:40:03.552150  480112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 06:40:03.552156  480112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 06:40:03.552159  480112 command_runner.go:130] > # limitation.
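	As a minimal sketch of a runtime entry permitting the notifier annotation described above (the handler name "runc-notifier" is hypothetical; only the allowed_annotations line is what the feature requires):
	  [crio.runtime.runtimes.runc-notifier]
	  runtime_path = "/usr/libexec/crio/runc"
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]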
	I1205 06:40:03.552163  480112 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1205 06:40:03.552167  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1205 06:40:03.552170  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552174  480112 command_runner.go:130] > runtime_root = "/run/crun"
	I1205 06:40:03.552178  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552182  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552188  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552193  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552197  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552200  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552204  480112 command_runner.go:130] > allowed_annotations = [
	I1205 06:40:03.552208  480112 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1205 06:40:03.552211  480112 command_runner.go:130] > ]
	I1205 06:40:03.552215  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552219  480112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 06:40:03.552223  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1205 06:40:03.552226  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552230  480112 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 06:40:03.552234  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552237  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552241  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552248  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552252  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552256  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552260  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552267  480112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 06:40:03.552272  480112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 06:40:03.552278  480112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 06:40:03.552286  480112 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1205 06:40:03.552300  480112 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1205 06:40:03.552310  480112 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1205 06:40:03.552319  480112 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1205 06:40:03.552324  480112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 06:40:03.552334  480112 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 06:40:03.552342  480112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 06:40:03.552349  480112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 06:40:03.552356  480112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 06:40:03.552359  480112 command_runner.go:130] > # Example:
	I1205 06:40:03.552364  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 06:40:03.552368  480112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 06:40:03.552373  480112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 06:40:03.552382  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 06:40:03.552385  480112 command_runner.go:130] > # cpuset = "0-1"
	I1205 06:40:03.552389  480112 command_runner.go:130] > # cpushares = "5"
	I1205 06:40:03.552392  480112 command_runner.go:130] > # cpuquota = "1000"
	I1205 06:40:03.552396  480112 command_runner.go:130] > # cpuperiod = "100000"
	I1205 06:40:03.552399  480112 command_runner.go:130] > # cpulimit = "35"
	I1205 06:40:03.552402  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.552406  480112 command_runner.go:130] > # The workload name is workload-type.
	I1205 06:40:03.552413  480112 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 06:40:03.552419  480112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 06:40:03.552424  480112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 06:40:03.552432  480112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 06:40:03.552438  480112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
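	As a worked instance of the cpulimit-to-cpuquota calculation described above, using the sample values from the example block:
	  # cpulimit = "35" millicores with cpuperiod = "100000" microseconds gives
	  #   cpuquota = (35 / 1000) * 100000 = 3500 microseconds per period,
	  # which overrides the cpuquota = "1000" value supplied in the example.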
	I1205 06:40:03.552445  480112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 06:40:03.552452  480112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 06:40:03.552456  480112 command_runner.go:130] > # Default value is set to true
	I1205 06:40:03.552461  480112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 06:40:03.552466  480112 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 06:40:03.552471  480112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 06:40:03.552475  480112 command_runner.go:130] > # Default value is set to 'false'
	I1205 06:40:03.552479  480112 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 06:40:03.552484  480112 command_runner.go:130] > # timezone: Sets the timezone for a container in CRI-O.
	I1205 06:40:03.552492  480112 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1205 06:40:03.552495  480112 command_runner.go:130] > # timezone = ""
	I1205 06:40:03.552502  480112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 06:40:03.552504  480112 command_runner.go:130] > #
	I1205 06:40:03.552510  480112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 06:40:03.552517  480112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1205 06:40:03.552520  480112 command_runner.go:130] > [crio.image]
	I1205 06:40:03.552526  480112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 06:40:03.552530  480112 command_runner.go:130] > # default_transport = "docker://"
	I1205 06:40:03.552536  480112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 06:40:03.552543  480112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552547  480112 command_runner.go:130] > # global_auth_file = ""
	I1205 06:40:03.552552  480112 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 06:40:03.552557  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552561  480112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.552568  480112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 06:40:03.552574  480112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552581  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552585  480112 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 06:40:03.552591  480112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 06:40:03.552597  480112 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 06:40:03.552603  480112 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 06:40:03.552608  480112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 06:40:03.552612  480112 command_runner.go:130] > # pause_command = "/pause"
	I1205 06:40:03.552622  480112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 06:40:03.552628  480112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 06:40:03.552641  480112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 06:40:03.552646  480112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 06:40:03.552652  480112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 06:40:03.552658  480112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 06:40:03.552661  480112 command_runner.go:130] > # pinned_images = [
	I1205 06:40:03.552664  480112 command_runner.go:130] > # ]
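	To illustrate the three pattern styles described above (the image names other than the pause image are hypothetical examples):
	  pinned_images = [
	  	"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	  	"registry.k8s.io/kube-*",        # glob: a wildcard only at the end
	  	"*coredns*",                     # keyword: wildcards on both ends
	  ]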
	I1205 06:40:03.552670  480112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 06:40:03.552675  480112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 06:40:03.552681  480112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 06:40:03.552687  480112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 06:40:03.552692  480112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 06:40:03.552697  480112 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1205 06:40:03.552702  480112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 06:40:03.552708  480112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 06:40:03.552716  480112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 06:40:03.552722  480112 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1205 06:40:03.552728  480112 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 06:40:03.552733  480112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
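	As a sketch of how the namespaced lookup above resolves (the namespace is a hypothetical example):
	  # signature_policy_dir = "/etc/crio/policies" with a pull in namespace "kube-system"
	  #   -> /etc/crio/policies/kube-system.json is tried first;
	  # if that file does not exist, the signature_policy above (or the system-wide policy) is used.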
	I1205 06:40:03.552738  480112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 06:40:03.552746  480112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 06:40:03.552749  480112 command_runner.go:130] > # changing them here.
	I1205 06:40:03.552755  480112 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1205 06:40:03.552758  480112 command_runner.go:130] > # insecure_registries = [
	I1205 06:40:03.552761  480112 command_runner.go:130] > # ]
	I1205 06:40:03.552767  480112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind, and
	I1205 06:40:03.552772  480112 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1205 06:40:03.552776  480112 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 06:40:03.552780  480112 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 06:40:03.553031  480112 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 06:40:03.553083  480112 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1205 06:40:03.553106  480112 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1205 06:40:03.553125  480112 command_runner.go:130] > # auto_reload_registries = false
	I1205 06:40:03.553145  480112 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1205 06:40:03.553166  480112 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1205 06:40:03.553207  480112 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1205 06:40:03.553227  480112 command_runner.go:130] > # pull_progress_timeout = "0s"
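	As a quick worked example of the interval rule above (the timeout value is hypothetical):
	  pull_progress_timeout = "50s"   # progress would then be reported about every 5s (50s / 10)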
	I1205 06:40:03.553245  480112 command_runner.go:130] > # The mode of short name resolution.
	I1205 06:40:03.553268  480112 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1205 06:40:03.553288  480112 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1205 06:40:03.553305  480112 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1205 06:40:03.553320  480112 command_runner.go:130] > # short_name_mode = "enforcing"
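	To make the two modes concrete (a sketch; the "nginx" short name and the registry search list are hypothetical):
	  # With unqualified-search-registries = ["docker.io", "quay.io"] in registries.conf,
	  # pulling the short name "nginx" is ambiguous:
	  #   short_name_mode = "enforcing"  -> the pull fails
	  #   short_name_mode = "disabled"   -> the first result (from docker.io, the first registry) is chosen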
	I1205 06:40:03.553338  480112 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1205 06:40:03.553365  480112 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1205 06:40:03.553538  480112 command_runner.go:130] > # oci_artifact_mount_support = true
	I1205 06:40:03.553551  480112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 06:40:03.553555  480112 command_runner.go:130] > # CNI plugins.
	I1205 06:40:03.553559  480112 command_runner.go:130] > [crio.network]
	I1205 06:40:03.553564  480112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 06:40:03.553570  480112 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 06:40:03.553574  480112 command_runner.go:130] > # cni_default_network = ""
	I1205 06:40:03.553580  480112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 06:40:03.553587  480112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 06:40:03.553592  480112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 06:40:03.553597  480112 command_runner.go:130] > # plugin_dirs = [
	I1205 06:40:03.553600  480112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 06:40:03.553603  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553607  480112 command_runner.go:130] > # List of included pod metrics.
	I1205 06:40:03.553616  480112 command_runner.go:130] > # included_pod_metrics = [
	I1205 06:40:03.553620  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553625  480112 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 06:40:03.553628  480112 command_runner.go:130] > [crio.metrics]
	I1205 06:40:03.553634  480112 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 06:40:03.553637  480112 command_runner.go:130] > # enable_metrics = false
	I1205 06:40:03.553641  480112 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 06:40:03.553646  480112 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 06:40:03.553655  480112 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 06:40:03.553661  480112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 06:40:03.553670  480112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 06:40:03.553675  480112 command_runner.go:130] > # metrics_collectors = [
	I1205 06:40:03.553679  480112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 06:40:03.553683  480112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 06:40:03.553687  480112 command_runner.go:130] > # 	"containers_oom_total",
	I1205 06:40:03.553691  480112 command_runner.go:130] > # 	"processes_defunct",
	I1205 06:40:03.553695  480112 command_runner.go:130] > # 	"operations_total",
	I1205 06:40:03.553699  480112 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 06:40:03.553703  480112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 06:40:03.553707  480112 command_runner.go:130] > # 	"operations_errors_total",
	I1205 06:40:03.553711  480112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 06:40:03.553715  480112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 06:40:03.553719  480112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 06:40:03.553723  480112 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 06:40:03.553727  480112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 06:40:03.553731  480112 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 06:40:03.553736  480112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 06:40:03.553740  480112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 06:40:03.553744  480112 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1205 06:40:03.553747  480112 command_runner.go:130] > # ]
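	As a sketch of the prefixing rule above (enable_metrics is shown only for context):
	  [crio.metrics]
	  enable_metrics = true
	  metrics_collectors = [
	  	"operations_total",   # treated the same as "crio_operations_total"
	  ]                       # and as "container_runtime_crio_operations_total"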
	I1205 06:40:03.553753  480112 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1205 06:40:03.553758  480112 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1205 06:40:03.553763  480112 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 06:40:03.553767  480112 command_runner.go:130] > # metrics_port = 9090
	I1205 06:40:03.553772  480112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 06:40:03.553775  480112 command_runner.go:130] > # metrics_socket = ""
	I1205 06:40:03.553780  480112 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 06:40:03.553786  480112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 06:40:03.553792  480112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 06:40:03.553798  480112 command_runner.go:130] > # certificate on any modification event.
	I1205 06:40:03.553802  480112 command_runner.go:130] > # metrics_cert = ""
	I1205 06:40:03.553807  480112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 06:40:03.553812  480112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 06:40:03.553822  480112 command_runner.go:130] > # metrics_key = ""
	I1205 06:40:03.553828  480112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 06:40:03.553831  480112 command_runner.go:130] > [crio.tracing]
	I1205 06:40:03.553836  480112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 06:40:03.553841  480112 command_runner.go:130] > # enable_tracing = false
	I1205 06:40:03.553846  480112 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 06:40:03.553850  480112 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1205 06:40:03.553857  480112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 06:40:03.553861  480112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 06:40:03.553865  480112 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 06:40:03.553868  480112 command_runner.go:130] > [crio.nri]
	I1205 06:40:03.553872  480112 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 06:40:03.553876  480112 command_runner.go:130] > # enable_nri = true
	I1205 06:40:03.553880  480112 command_runner.go:130] > # NRI socket to listen on.
	I1205 06:40:03.553884  480112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 06:40:03.553888  480112 command_runner.go:130] > # NRI plugin directory to use.
	I1205 06:40:03.553893  480112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 06:40:03.553898  480112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 06:40:03.553902  480112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 06:40:03.553908  480112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 06:40:03.553979  480112 command_runner.go:130] > # nri_disable_connections = false
	I1205 06:40:03.553985  480112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 06:40:03.553990  480112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 06:40:03.553995  480112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 06:40:03.554000  480112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 06:40:03.554004  480112 command_runner.go:130] > # NRI default validator configuration.
	I1205 06:40:03.554011  480112 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1205 06:40:03.554017  480112 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1205 06:40:03.554021  480112 command_runner.go:130] > # can be restricted/rejected:
	I1205 06:40:03.554025  480112 command_runner.go:130] > # - OCI hook injection
	I1205 06:40:03.554030  480112 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1205 06:40:03.554035  480112 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1205 06:40:03.554039  480112 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1205 06:40:03.554047  480112 command_runner.go:130] > # - adjustment of linux namespaces
	I1205 06:40:03.554054  480112 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1205 06:40:03.554060  480112 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1205 06:40:03.554066  480112 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1205 06:40:03.554070  480112 command_runner.go:130] > #
	I1205 06:40:03.554075  480112 command_runner.go:130] > # [crio.nri.default_validator]
	I1205 06:40:03.554079  480112 command_runner.go:130] > # nri_enable_default_validator = false
	I1205 06:40:03.554084  480112 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1205 06:40:03.554090  480112 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1205 06:40:03.554095  480112 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1205 06:40:03.554101  480112 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1205 06:40:03.554106  480112 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1205 06:40:03.554110  480112 command_runner.go:130] > # nri_validator_required_plugins = [
	I1205 06:40:03.554113  480112 command_runner.go:130] > # ]
	I1205 06:40:03.554118  480112 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
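	A minimal sketch of enabling the default validator described above (the required plugin name is hypothetical):
	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	  	"resource-policy",   # hypothetical plugin that must process every container creation
	  ]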
	I1205 06:40:03.554124  480112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 06:40:03.554127  480112 command_runner.go:130] > [crio.stats]
	I1205 06:40:03.554133  480112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 06:40:03.554138  480112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 06:40:03.554142  480112 command_runner.go:130] > # stats_collection_period = 0
	I1205 06:40:03.554148  480112 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1205 06:40:03.554154  480112 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1205 06:40:03.554158  480112 command_runner.go:130] > # collection_period = 0
	I1205 06:40:03.556162  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527241832Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1205 06:40:03.556207  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527278608Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1205 06:40:03.556230  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527308122Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1205 06:40:03.556255  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.52733264Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1205 06:40:03.556280  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527409367Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.556295  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527814951Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1205 06:40:03.556306  480112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 06:40:03.556383  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:40:03.556397  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:40:03.556420  480112 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:40:03.556447  480112 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:40:03.556582  480112 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:40:03.556659  480112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:40:03.563611  480112 command_runner.go:130] > kubeadm
	I1205 06:40:03.563630  480112 command_runner.go:130] > kubectl
	I1205 06:40:03.563636  480112 command_runner.go:130] > kubelet
	I1205 06:40:03.564590  480112 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:40:03.564681  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:40:03.572146  480112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:40:03.584914  480112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:40:03.598402  480112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 06:40:03.610806  480112 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:40:03.614247  480112 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:40:03.614336  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.749526  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:04.526831  480112 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:40:04.526920  480112 certs.go:195] generating shared ca certs ...
	I1205 06:40:04.526970  480112 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:04.527146  480112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:40:04.527262  480112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:40:04.527298  480112 certs.go:257] generating profile certs ...
	I1205 06:40:04.527454  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:40:04.527572  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:40:04.527654  480112 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:40:04.527683  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:40:04.527717  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:40:04.527750  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:40:04.527779  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:40:04.527812  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:40:04.527845  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:40:04.527901  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:40:04.527942  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:40:04.528018  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:40:04.528084  480112 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:40:04.528110  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:40:04.528175  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:40:04.528223  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:40:04.528266  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:40:04.528351  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:04.528416  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.528448  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem -> /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.528484  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /usr/share/ca-certificates/4441472.pem
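The NewFileAsset lines above pair each certificate on the Jenkins host with its destination path inside the guest, before the scp transfers that follow actually copy them. A minimal sketch of that source-to-destination bookkeeping (the type and loop are illustrative, not minikube's actual API):

    package main

    import "fmt"

    // fileAsset pairs a host path with its destination inside the guest.
    // This mirrors the source -> target mapping recorded by the
    // NewFileAsset lines above; the type and helper are hypothetical.
    type fileAsset struct {
        src, dst string
    }

    func main() {
        assets := []fileAsset{
            {".minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"},
            {".minikube/profiles/functional-787602/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
        }
        for _, a := range assets {
            fmt.Printf("scp %s --> %s\n", a.src, a.dst)
        }
    }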
	I1205 06:40:04.529122  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:40:04.549434  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:40:04.568942  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:40:04.588032  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:40:04.616779  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:40:04.636137  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:40:04.655504  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:40:04.673755  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:40:04.692822  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:40:04.711199  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:40:04.730794  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:40:04.748559  480112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:40:04.762229  480112 ssh_runner.go:195] Run: openssl version
	I1205 06:40:04.768327  480112 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:40:04.768697  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.776287  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:40:04.784133  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788189  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788221  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788277  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.829541  480112 command_runner.go:130] > b5213941
	I1205 06:40:04.829985  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:40:04.837884  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.845797  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:40:04.853974  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.857841  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858230  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858295  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.900152  480112 command_runner.go:130] > 51391683
	I1205 06:40:04.900696  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:40:04.908660  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.916381  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:40:04.924345  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928449  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928489  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928538  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.969475  480112 command_runner.go:130] > 3ec20f2e
	I1205 06:40:04.969979  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
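Each PEM installed under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and exposed to OpenSSL's CA lookup through a "<hash>.0" symlink in /etc/ssl/certs; the b5213941, 51391683, and 3ec20f2e values above are those subject-name hashes. A sketch reproducing one hash-and-link step with the exact commands from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Reproduces the hash-and-link step from the log: OpenSSL resolves CAs
    // in /etc/ssl/certs via "<subject-hash>.0" symlinks, so after copying a
    // PEM we compute its hash and point the symlink at it. Paths are the
    // ones from the log; error handling is minimal for brevity.
    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pem)
    }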
	I1205 06:40:04.977627  480112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981676  480112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981703  480112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:40:04.981710  480112 command_runner.go:130] > Device: 259,1	Inode: 1046940     Links: 1
	I1205 06:40:04.981717  480112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:04.981724  480112 command_runner.go:130] > Access: 2025-12-05 06:35:56.052204819 +0000
	I1205 06:40:04.981729  480112 command_runner.go:130] > Modify: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981735  480112 command_runner.go:130] > Change: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981741  480112 command_runner.go:130] >  Birth: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981812  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:40:05.025511  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.026281  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:40:05.067472  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.067923  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:40:05.109199  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.110439  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:40:05.151291  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.151789  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:40:05.192630  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.193112  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:40:05.234917  480112 command_runner.go:130] > Certificate will not expire
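"openssl x509 -checkend 86400" exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, which is why each probe above reports "Certificate will not expire". The same check done natively in Go, against one of the paths probed in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Native equivalent of "openssl x509 -checkend 86400": load a PEM cert
    // and report whether it expires within the next 24 hours.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }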
	I1205 06:40:05.235493  480112 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:40:05.235576  480112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:40:05.235658  480112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:40:05.274773  480112 cri.go:89] found id: ""
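Before deciding how to start, the runner lists any surviving kube-system containers through crictl; the empty result above (found id: "") means none are running. A standalone reproduction of that probe, using the same command the log shows:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Reproduction of the CRI listing step: ask crictl for all container
    // IDs labelled with the kube-system namespace. An empty result, as in
    // the log, means no control-plane containers survive.
    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }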
	I1205 06:40:05.274854  480112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:40:05.284543  480112 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:40:05.284569  480112 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:40:05.284576  480112 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:40:05.284587  480112 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:40:05.284593  480112 kubeadm.go:598] restartPrimaryControlPlane start ...
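The restart decision keys off leftover state: if the kubelet config, the kubeadm flags file, and the etcd data directory all exist, minikube attempts a cluster restart instead of a fresh init. A hypothetical sketch of that presence check, using the paths from the "sudo ls" output above:

    package main

    import (
        "fmt"
        "os"
    )

    // Sketch of the "found existing configuration files" probe: if kubelet
    // config and an etcd data dir survive from a previous run, a restart is
    // attempted instead of a fresh "kubeadm init".
    func main() {
        markers := []string{
            "/var/lib/kubelet/config.yaml",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/minikube/etcd",
        }
        existing := true
        for _, p := range markers {
            if _, err := os.Stat(p); err != nil {
                existing = false
                break
            }
        }
        if existing {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no prior state, initializing a new control plane")
        }
    }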
	I1205 06:40:05.284641  480112 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:40:05.293745  480112 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:40:05.294169  480112 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.294277  480112 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-441321/kubeconfig needs updating (will repair): [kubeconfig missing "functional-787602" cluster setting kubeconfig missing "functional-787602" context setting]
	I1205 06:40:05.294658  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
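The kubeconfig repair above notices that the "functional-787602" cluster and context entries are missing and rewrites the file under a write lock. A sketch of that ensure-and-write flow using client-go's clientcmd package (locking omitted; server, CA, and file paths taken from the log):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // Ensure the named cluster and context exist in a kubeconfig, then
    // write it back. Minikube serializes writes with its own file lock,
    // which this sketch omits.
    func repair(path, name, server, caPath string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            cfg = api.NewConfig()
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := api.NewCluster()
            c.Server = server
            c.CertificateAuthority = caPath
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := api.NewContext()
            ctx.Cluster = name
            ctx.AuthInfo = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = repair(
            "/home/jenkins/minikube-integration/21997-441321/kubeconfig",
            "functional-787602",
            "https://192.168.49.2:8441",
            "/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt",
        )
    }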
	I1205 06:40:05.295082  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.295239  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.295723  480112 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:40:05.295760  480112 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:40:05.295766  480112 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:40:05.295771  480112 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:40:05.295779  480112 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:40:05.296148  480112 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:40:05.296228  480112 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:40:05.305058  480112 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:40:05.305103  480112 kubeadm.go:602] duration metric: took 20.504477ms to restartPrimaryControlPlane
	I1205 06:40:05.305113  480112 kubeadm.go:403] duration metric: took 69.632192ms to StartCluster
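"The running cluster does not require reconfiguration" comes from diffing the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new (the "sudo diff -u" above); diff exits 0 when the files match, so the control plane is left untouched. A minimal reproduction of that comparison:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Diff the live kubeadm config against the newly rendered one; a zero
    // exit status means no reconfiguration is required.
    func main() {
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").Run()
        if err == nil {
            fmt.Println("no reconfiguration required")
        } else {
            fmt.Println("kubeadm config drifted; reconfiguring")
        }
    }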
	I1205 06:40:05.305127  480112 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305185  480112 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.305773  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305969  480112 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:40:05.306285  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:05.306340  480112 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:40:05.306433  480112 addons.go:70] Setting storage-provisioner=true in profile "functional-787602"
	I1205 06:40:05.306448  480112 addons.go:239] Setting addon storage-provisioner=true in "functional-787602"
	I1205 06:40:05.306452  480112 addons.go:70] Setting default-storageclass=true in profile "functional-787602"
	I1205 06:40:05.306473  480112 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-787602"
	I1205 06:40:05.306480  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.306771  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.306997  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.310651  480112 out.go:179] * Verifying Kubernetes components...
	I1205 06:40:05.313979  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:05.339795  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.340007  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.340282  480112 addons.go:239] Setting addon default-storageclass=true in "functional-787602"
	I1205 06:40:05.340312  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.340728  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.361959  480112 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:40:05.364893  480112 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.364921  480112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:40:05.364987  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.384451  480112 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:05.384479  480112 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:40:05.384563  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.411372  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.432092  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.510112  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:05.550609  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.557147  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.275527  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275618  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275677  480112 retry.go:31] will retry after 247.926554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275753  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275786  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275814  480112 retry.go:31] will retry after 139.276641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275869  480112 node_ready.go:35] waiting up to 6m0s for node "functional-787602" to be "Ready" ...
	I1205 06:40:06.275986  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.276069  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.276382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
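The round_trippers entries above are the node-readiness poll: a GET against /api/v1/nodes/functional-787602 roughly every 500ms, where status="" and milliseconds=0 mean the TCP dial itself failed before any HTTP response. A stand-in that only illustrates the cadence and the connection-refused case (client certificates and the minikube CA are omitted, so a live cluster would reject the TLS handshake):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Poll the node resource on a fixed 500ms cadence and report dial
    // failures, mimicking the request loop in the log. Not minikube's
    // client; it uses a client-go round tripper with mutual TLS.
    func main() {
        url := "https://192.168.49.2:8441/api/v1/nodes/functional-787602"
        for i := 0; i < 5; i++ {
            resp, err := http.Get(url)
            if err != nil {
                fmt.Println("will retry:", err)
            } else {
                fmt.Println("status:", resp.Status)
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
    }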
	I1205 06:40:06.415646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.474935  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.474981  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.475001  480112 retry.go:31] will retry after 366.421161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.524197  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.584795  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.584843  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.584873  480112 retry.go:31] will retry after 312.76439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
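The apply failures above all trace to the API server on localhost:8441 not yet accepting connections; retry.go schedules each new attempt after a randomized, roughly growing delay rather than a fixed interval, which is why the "will retry after" durations vary. A sketch of that jittered-backoff pattern (the attempt function is a stand-in, not minikube's code):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // Retry fn up to attempts times, sleeping a jittered, roughly doubling
    // delay between failures, in the spirit of the retry.go lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // jittered backoff: base*2^i scaled by a random factor in [0.5, 1.5)
            d := time.Duration(float64(base) * float64(int(1)<<i) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        _ = retry(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("connect: connection refused")
            }
            return nil
        })
    }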
	I1205 06:40:06.776120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.776655  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.841962  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.898526  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.904086  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.904127  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.904149  480112 retry.go:31] will retry after 740.273906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.959857  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.963461  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.963497  480112 retry.go:31] will retry after 759.965783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.276975  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.277072  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.277469  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.645230  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:07.705790  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.705833  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.705854  480112 retry.go:31] will retry after 642.466008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.724048  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:07.776045  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.776157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.776481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.791584  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.795338  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.795382  480112 retry.go:31] will retry after 614.279076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:08.276605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:08.348828  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:08.405271  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.408500  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.408576  480112 retry.go:31] will retry after 1.343995427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.410740  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:08.473489  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.473541  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.473564  480112 retry.go:31] will retry after 1.078913702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.777094  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.777222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.777651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.276356  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.276453  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.276780  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.553646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:09.614016  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.614089  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.614116  480112 retry.go:31] will retry after 2.379780781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.753405  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:09.777031  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.777132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.777482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.813171  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.813239  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.813272  480112 retry.go:31] will retry after 1.978465808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:10.276816  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.277257  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:10.277348  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:10.776020  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.776102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.776363  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.276081  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.276155  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.791876  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:11.850961  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:11.851011  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.851047  480112 retry.go:31] will retry after 1.715194365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.994161  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:12.058032  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:12.058079  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.058098  480112 retry.go:31] will retry after 2.989540966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.276377  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.276451  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.276701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:12.776111  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:12.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:13.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.276532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:13.567026  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:13.620219  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:13.623514  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.623554  480112 retry.go:31] will retry after 5.458226005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.776806  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.776876  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.777207  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.277043  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.277126  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.277411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:14.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:15.048089  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:15.111053  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:15.111091  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.111112  480112 retry.go:31] will retry after 5.631155228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
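
Each ssh_runner.go:195 "Run:" line is minikube executing a command inside the node over SSH and capturing its output. A hedged sketch of that mechanism using golang.org/x/crypto/ssh; the user, key path, and address are illustrative assumptions, not minikube's actual configuration:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are assumptions for illustration.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/functional-787602/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
	}
	client, err := ssh.Dial("tcp", "192.168.49.2:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s\nerr=%v\n", out, err)
}
```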
	I1205 06:40:15.276375  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.276443  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:15.776648  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.777039  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.276857  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.276930  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.776968  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.777300  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:16.777347  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:17.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.276495  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:17.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.276064  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.276137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.276439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:19.082075  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:19.143244  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:19.143293  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.143314  480112 retry.go:31] will retry after 4.646546475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.276638  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.276712  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.277087  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:19.277141  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:19.776926  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.777007  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.777341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.276113  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.743196  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:20.776726  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.776805  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.777070  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.801108  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:20.801144  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:20.801162  480112 retry.go:31] will retry after 9.136671028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:21.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.276973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.277268  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:21.277311  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:21.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.276249  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.776313  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.776619  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.276136  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.776172  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.776265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:23.776664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
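
Every failure in this stretch is the same `dial tcp ... connect: connection refused`: the node answers TCP but nothing is listening on port 8441, i.e. kube-apiserver itself is down, rather than a routing or firewall problem (which would show up as a timeout). A quick probe that confirms that reading of the error, with the address taken from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" means the host replied with a TCP RST: reachable,
	// but no process bound to 8441. A timeout would point elsewhere.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting TCP connections")
}
```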
	I1205 06:40:23.790980  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:23.852305  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:23.852351  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:23.852373  480112 retry.go:31] will retry after 4.852638111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:24.276878  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.276951  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.277225  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:24.776145  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.276631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.776628  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.776885  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:25.776924  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:26.276685  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.276766  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.277101  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:26.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.777045  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.777350  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.277008  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.277082  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.277349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.776144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:28.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.276512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:28.276571  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:28.705256  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:28.766465  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:28.766519  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.766541  480112 retry.go:31] will retry after 15.718503653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.777014  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.276890  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.276967  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.277333  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.776501  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.776578  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.776920  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.938493  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:30.002212  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:30.002257  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.002277  480112 retry.go:31] will retry after 5.082732051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.276542  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.276613  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.276880  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:30.276935  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:30.776666  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.776745  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.777100  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.276768  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.276846  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.277194  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.776934  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.777009  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.777271  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.276395  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.276491  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.276813  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.776164  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.776245  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:32.776649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.276140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:33.776151  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.276378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.776431  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.776497  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.776750  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:34.776788  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:35.085301  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:35.148531  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:35.152882  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.152918  480112 retry.go:31] will retry after 11.086200752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.276603  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:35.777106  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.777182  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.777443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:36.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:40:36.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:36.276482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:36.776167  480112 type.go:168] "Request Body" body=""
	I1205 06:40:36.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:36.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:37.276190  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.276271  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.276583  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:37.276633  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:37.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.776452  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:38.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:40:38.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:38.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:38.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:40:38.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:38.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:39.276021  480112 type.go:168] "Request Body" body=""
	I1205 06:40:39.276100  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:39.276361  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:39.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:40:39.776193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:39.776520  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:39.776575  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:40.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:40.776012  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.776078  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:41.276108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:41.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:41.276540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:41.776119  480112 type.go:168] "Request Body" body=""
	I1205 06:40:41.776197  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:41.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:42.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.276583  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:42.276631  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:42.776277  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.776365  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.776691  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:43.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:43.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:43.276573  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:43.776091  480112 type.go:168] "Request Body" body=""
	I1205 06:40:43.776169  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:43.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:44.276128  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.276566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:44.485984  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:44.554072  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:44.557893  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.557927  480112 retry.go:31] will retry after 22.628614414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.776735  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:44.776781  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:45.276131  480112 type.go:168] "Request Body" body=""
	I1205 06:40:45.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:45.276570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:45.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:40:45.776253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:45.776599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.239320  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:46.276723  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.277080  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.296820  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:46.296888  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.296909  480112 retry.go:31] will retry after 16.475007469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.776108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.776261  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:47.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:40:47.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:47.276550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:47.276621  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:47.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:40:47.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:47.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:48.276087  480112 type.go:168] "Request Body" body=""
	I1205 06:40:48.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:48.276413  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:48.776153  480112 type.go:168] "Request Body" body=""
	I1205 06:40:48.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:48.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:49.276252  480112 type.go:168] "Request Body" body=""
	I1205 06:40:49.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:49.276616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:49.276666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:49.776457  480112 type.go:168] "Request Body" body=""
	I1205 06:40:49.776539  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:49.776814  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:50.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:40:50.276228  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:50.276579  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:50.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:40:50.776350  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:50.776648  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:51.276069  480112 type.go:168] "Request Body" body=""
	I1205 06:40:51.276140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:51.276477  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:40:51.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:51.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:51.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:52.276278  480112 type.go:168] "Request Body" body=""
	I1205 06:40:52.276356  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:52.276689  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:52.776334  480112 type.go:168] "Request Body" body=""
	I1205 06:40:52.776409  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:52.776733  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:53.276108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:53.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:53.276508  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:53.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:40:53.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:53.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:54.276120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:54.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:54.276515  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:54.276568  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:54.776448  480112 type.go:168] "Request Body" body=""
	I1205 06:40:54.776530  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:54.776853  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:55.276690  480112 type.go:168] "Request Body" body=""
	I1205 06:40:55.276781  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:55.277125  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:55.777039  480112 type.go:168] "Request Body" body=""
	I1205 06:40:55.777119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:55.777385  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:56.276092  480112 type.go:168] "Request Body" body=""
	I1205 06:40:56.276176  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:56.276480  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:56.776105  480112 type.go:168] "Request Body" body=""
	I1205 06:40:56.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:56.776525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:56.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:57.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:40:57.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:57.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:57.776222  480112 type.go:168] "Request Body" body=""
	I1205 06:40:57.776307  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:57.776615  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:58.276119  480112 type.go:168] "Request Body" body=""
	I1205 06:40:58.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:58.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:58.776243  480112 type.go:168] "Request Body" body=""
	I1205 06:40:58.776317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:58.776568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:58.776608  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:59.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:40:59.276178  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:59.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:59.776106  480112 type.go:168] "Request Body" body=""
	I1205 06:40:59.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:59.776468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:00.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:41:00.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:00.276551  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:00.776216  480112 type.go:168] "Request Body" body=""
	I1205 06:41:00.776291  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:00.776616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:00.776689  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:01.276381  480112 type.go:168] "Request Body" body=""
	I1205 06:41:01.276456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:01.276781  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:01.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:41:01.776172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:01.776479  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:02.276128  480112 type.go:168] "Request Body" body=""
	I1205 06:41:02.276202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:02.276529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:02.772181  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:02.776748  480112 type.go:168] "Request Body" body=""
	I1205 06:41:02.776818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:02.777092  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:02.777132  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:02.828748  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:02.831873  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:02.831907  480112 retry.go:31] will retry after 23.767145255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
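Here the addon apply fails because kubectl cannot reach the apiserver, and minikube's retry layer schedules another attempt after a randomized pause ("will retry after 23.767145255s"). A rough Go sketch of that apply-and-retry shape, with hypothetical names and a made-up backoff range (not the actual retry.go implementation):

    package main

    import (
        "fmt"
        "log"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs a kubectl apply on failure, pausing a
    // randomized interval between attempts, mirroring the "apply failed,
    // will retry" lines logged above.
    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if e == nil {
                return nil
            }
            err = fmt.Errorf("%v: %s", e, out)
            pause := 20*time.Second + time.Duration(rand.Int63n(int64(15*time.Second)))
            log.Printf("apply failed, will retry after %s: %v", pause, err)
            time.Sleep(pause)
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
            log.Fatal(err)
        }
    }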
	I1205 06:41:03.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:41:03.276184  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:03.276443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:03.776136  480112 type.go:168] "Request Body" body=""
	I1205 06:41:03.776260  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:03.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:04.276224  480112 type.go:168] "Request Body" body=""
	I1205 06:41:04.276300  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:04.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:04.776644  480112 type.go:168] "Request Body" body=""
	I1205 06:41:04.776715  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:04.777004  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:05.276846  480112 type.go:168] "Request Body" body=""
	I1205 06:41:05.276924  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:05.277214  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:05.277261  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:05.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:41:05.776214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:05.776532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:06.276212  480112 type.go:168] "Request Body" body=""
	I1205 06:41:06.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:06.276590  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:06.776196  480112 type.go:168] "Request Body" body=""
	I1205 06:41:06.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:06.776601  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:07.187370  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:07.246801  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:07.246844  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:07.246863  480112 retry.go:31] will retry after 35.018877023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
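The failure mode in both applies is the same: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so with localhost:8441 refusing connections, validation itself fails before any manifest is submitted (hence the --validate=false hint in the error text). A small illustrative probe of that endpoint, assuming a plain-HTTP path for simplicity (a real check would need the cluster's TLS credentials from the kubeconfig):

    package main

    import (
        "fmt"
        "net/http"
    )

    // openapiReachable checks the endpoint kubectl fetches for client-side
    // validation; when the dial is refused, validation fails before any
    // object is applied, as in the log above.
    func openapiReachable(base string) bool {
        resp, err := http.Get(base + "/openapi/v2?timeout=32s")
        if err != nil {
            return false // e.g. connect: connection refused
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println(openapiReachable("http://localhost:8441"))
    }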
	I1205 06:41:07.277002  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.277431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:07.277488  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:07.777040  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.777122  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.777377  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:08.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:41:08.276233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:08.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:08.776269  480112 type.go:168] "Request Body" body=""
	I1205 06:41:08.776342  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:08.776663  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:09.276083  480112 type.go:168] "Request Body" body=""
	I1205 06:41:09.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:09.276449  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:09.776169  480112 type.go:168] "Request Body" body=""
	I1205 06:41:09.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:09.776565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:09.776619  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:10.276305  480112 type.go:168] "Request Body" body=""
	I1205 06:41:10.276400  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:10.276764  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:10.776148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:10.776250  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:10.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:11.276113  480112 type.go:168] "Request Body" body=""
	I1205 06:41:11.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:11.276485  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:11.776174  480112 type.go:168] "Request Body" body=""
	I1205 06:41:11.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:11.776577  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:12.276067  480112 type.go:168] "Request Body" body=""
	I1205 06:41:12.276164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:12.276478  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:12.276527  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:12.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:41:12.776235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:12.776538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:13.276166  480112 type.go:168] "Request Body" body=""
	I1205 06:41:13.276248  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:13.276599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:13.776284  480112 type.go:168] "Request Body" body=""
	I1205 06:41:13.776354  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:13.776688  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:14.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:41:14.276205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:14.276498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:14.276544  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:14.776236  480112 type.go:168] "Request Body" body=""
	I1205 06:41:14.776308  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:14.776593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:15.276073  480112 type.go:168] "Request Body" body=""
	I1205 06:41:15.276150  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:15.276414  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:15.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:41:15.776181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:15.776481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:16.276073  480112 type.go:168] "Request Body" body=""
	I1205 06:41:16.276153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:16.276430  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:16.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:41:16.776181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:16.776438  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:16.776478  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:17.276217  480112 type.go:168] "Request Body" body=""
	I1205 06:41:17.276289  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:17.276578  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:17.776257  480112 type.go:168] "Request Body" body=""
	I1205 06:41:17.776333  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:17.776671  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:18.276231  480112 type.go:168] "Request Body" body=""
	I1205 06:41:18.276301  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:18.276556  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:18.776250  480112 type.go:168] "Request Body" body=""
	I1205 06:41:18.776326  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:18.776636  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:18.776691  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:19.276385  480112 type.go:168] "Request Body" body=""
	I1205 06:41:19.276469  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:19.276800  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:19.776580  480112 type.go:168] "Request Body" body=""
	I1205 06:41:19.776660  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:19.777026  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:20.276771  480112 type.go:168] "Request Body" body=""
	I1205 06:41:20.276848  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:20.277227  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:20.777060  480112 type.go:168] "Request Body" body=""
	I1205 06:41:20.777195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:20.777544  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:20.777604  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:21.276075  480112 type.go:168] "Request Body" body=""
	I1205 06:41:21.276146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:21.276451  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:21.776144  480112 type.go:168] "Request Body" body=""
	I1205 06:41:21.776218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:21.776555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:22.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:22.276241  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:22.276600  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:22.776229  480112 type.go:168] "Request Body" body=""
	I1205 06:41:22.776301  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:22.776581  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:23.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:41:23.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:23.276514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:23.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:23.776148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:23.776224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:23.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:24.276138  480112 type.go:168] "Request Body" body=""
	I1205 06:41:24.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:24.276467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:24.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:41:24.776202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:24.776570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:25.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:25.276273  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:25.276607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:25.276662  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:25.776023  480112 type.go:168] "Request Body" body=""
	I1205 06:41:25.776090  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:25.776414  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:26.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:26.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:26.276503  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:26.599995  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:26.657664  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660860  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660976  480112 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:41:26.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:26.776182  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:26.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:27.276090  480112 type.go:168] "Request Body" body=""
	I1205 06:41:27.276161  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:27.276457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:27.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:41:27.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:27.776545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:27.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:28.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:41:28.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:28.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:28.776230  480112 type.go:168] "Request Body" body=""
	I1205 06:41:28.776304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:28.776618  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:29.276325  480112 type.go:168] "Request Body" body=""
	I1205 06:41:29.276412  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:29.276735  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:29.776649  480112 type.go:168] "Request Body" body=""
	I1205 06:41:29.776745  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:29.777083  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:29.777135  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:30.276977  480112 type.go:168] "Request Body" body=""
	I1205 06:41:30.277054  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:30.277385  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:30.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:30.776171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:30.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:31.276794  480112 type.go:168] "Request Body" body=""
	I1205 06:41:31.276886  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:31.277179  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:31.776939  480112 type.go:168] "Request Body" body=""
	I1205 06:41:31.777016  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:31.777293  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:31.777332  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:32.276045  480112 type.go:168] "Request Body" body=""
	I1205 06:41:32.276119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:32.276435  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:32.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:41:32.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:32.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:33.276086  480112 type.go:168] "Request Body" body=""
	I1205 06:41:33.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:33.276516  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:33.776169  480112 type.go:168] "Request Body" body=""
	I1205 06:41:33.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:33.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:34.276285  480112 type.go:168] "Request Body" body=""
	I1205 06:41:34.276364  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:34.276702  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:34.276756  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:34.776367  480112 type.go:168] "Request Body" body=""
	I1205 06:41:34.776450  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:34.776713  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:35.276380  480112 type.go:168] "Request Body" body=""
	I1205 06:41:35.276460  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:35.276788  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:35.776775  480112 type.go:168] "Request Body" body=""
	I1205 06:41:35.776849  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:35.777195  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:36.276774  480112 type.go:168] "Request Body" body=""
	I1205 06:41:36.276844  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:36.277103  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:36.277142  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:36.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:41:36.777059  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:36.777387  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:37.276088  480112 type.go:168] "Request Body" body=""
	I1205 06:41:37.276166  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:37.276497  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:37.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:41:37.776165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:37.776496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:38.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:41:38.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:38.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:38.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:41:38.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:38.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:38.776611  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:39.276263  480112 type.go:168] "Request Body" body=""
	I1205 06:41:39.276331  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:39.276598  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:39.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:41:39.776206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:39.776515  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:40.276217  480112 type.go:168] "Request Body" body=""
	I1205 06:41:40.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:40.276599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:40.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:41:40.776292  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:40.776600  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:40.776666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:41.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:41:41.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:41.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:41.776294  480112 type.go:168] "Request Body" body=""
	I1205 06:41:41.776370  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:41.776711  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.266330  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:42.277622  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.277694  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.277960  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.360709  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361696  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361795  480112 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:41:42.365007  480112 out.go:179] * Enabled addons: 
	I1205 06:41:42.368666  480112 addons.go:530] duration metric: took 1m37.062317768s for enable addons: enabled=[]
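At this point both addons have exhausted their retries: the enable step gives up, surfaces each callback error as a warning, and records an empty enabled set along with the total elapsed time ("took 1m37.062317768s"). A hypothetical sketch of that summary step (names invented; not minikube's actual addons.go):

    package main

    import (
        "log"
        "time"
    )

    // enableAddons runs each addon's enable callback, reports failures as
    // warnings, and logs the duration metric with whatever succeeded --
    // here an empty set, as in the log above.
    func enableAddons(start time.Time, addons map[string]func() error) []string {
        enabled := []string{}
        for name, enable := range addons {
            if err := enable(); err != nil {
                log.Printf("! Enabling '%s' returned an error: %v", name, err)
                continue
            }
            enabled = append(enabled, name)
        }
        log.Printf("duration metric: took %s for enable addons: enabled=%v",
            time.Since(start), enabled)
        return enabled
    }

    func main() {
        _ = enableAddons(time.Now(), map[string]func() error{})
    }

Note that the readiness poll below continues regardless: addon enablement and the node-Ready wait are independent steps.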
	I1205 06:41:42.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET poll above repeats every ~500ms from 06:41:42.776 through 06:42:41.776, every response returning status="" with milliseconds=0 (one outlier of 5ms at 06:42:04.782); node_ready.go:55 logs the following warning roughly every 2-2.5s throughout ...]
	W1205 06:42:41.776800  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:42.276460  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.276533  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.276835  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:42.776147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.276274  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.776371  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:44.276399  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.276475  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.276774  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:44.276818  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:44.776823  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.776896  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.777260  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.277015  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.277165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.277467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.276290  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.276372  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.276755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.776351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.776696  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:46.776865  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:47.276162  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.276562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:47.776400  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.776503  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.777026  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.276644  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.276723  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.276978  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.776820  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.776899  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.777234  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:48.777287  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:49.277045  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.277135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.277475  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:49.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.776484  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.276116  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.276446  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.776127  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.776478  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:51.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:51.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.776200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.276504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.776090  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.776470  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.276544  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:53.776655  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:54.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:54.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.776393  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.776463  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.776718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:55.776760  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:56.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:56.776279  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.776355  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.276419  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.776121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:58.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:58.276716  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:58.776027  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.776099  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.776349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.276501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.776817  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.776902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.777233  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:00.277352  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.277456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.277768  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:00.277814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:00.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.776654  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.776901  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.776971  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.777244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.277063  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.277162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.277496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:02.776546  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:03.276099  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.276179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:03.776134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.276221  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.276644  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.776637  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.776900  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:04.776951  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:05.276709  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:05.776977  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.777064  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.777431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.276481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.776100  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.776494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:07.276116  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:07.276607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:07.776246  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.776316  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.776269  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.276194  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.276266  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.276528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.776304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.776699  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:09.776757  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:10.276472  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.276560  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.276905  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:10.776647  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.776717  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.776986  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.276873  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.277209  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.777023  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.777098  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.777457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:11.777510  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:12.276089  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.276172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:12.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.276276  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.276351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.276678  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.776139  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:14.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.276507  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:14.276551  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:14.776223  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.776298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.776621  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.276090  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.276486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:16.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.276607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:16.276663  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:16.776324  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.776396  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.776758  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.276146  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.776230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.776575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.276431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:18.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:19.276304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.276385  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.276748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:19.776687  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.776760  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.777008  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:20.276846  480112 type.go:168] "Request Body" body=""
	I1205 06:43:20.276923  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:20.277244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:20.777028  480112 type.go:168] "Request Body" body=""
	I1205 06:43:20.777103  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:20.777448  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:20.777499  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:21.276166  480112 type.go:168] "Request Body" body=""
	I1205 06:43:21.276240  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:21.276519  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:21.776177  480112 type.go:168] "Request Body" body=""
	I1205 06:43:21.776259  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:21.776596  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:22.276311  480112 type.go:168] "Request Body" body=""
	I1205 06:43:22.276394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:22.276742  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:22.776321  480112 type.go:168] "Request Body" body=""
	I1205 06:43:22.776394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:22.776716  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:23.276454  480112 type.go:168] "Request Body" body=""
	I1205 06:43:23.276541  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:23.276962  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:23.277021  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:23.776836  480112 type.go:168] "Request Body" body=""
	I1205 06:43:23.776923  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:23.777277  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:24.277022  480112 type.go:168] "Request Body" body=""
	I1205 06:43:24.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:24.277402  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:24.776239  480112 type.go:168] "Request Body" body=""
	I1205 06:43:24.776322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:24.776645  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:25.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:43:25.276424  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:25.276715  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:25.776566  480112 type.go:168] "Request Body" body=""
	I1205 06:43:25.776639  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:25.776913  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:25.776962  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:26.276795  480112 type.go:168] "Request Body" body=""
	I1205 06:43:26.276868  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:26.277314  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:26.776043  480112 type.go:168] "Request Body" body=""
	I1205 06:43:26.776120  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:26.776468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:27.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:27.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:27.276458  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:27.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:43:27.776174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:27.776490  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:28.276190  480112 type.go:168] "Request Body" body=""
	I1205 06:43:28.276267  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:28.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:28.276648  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:28.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:43:28.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:28.776457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:29.276201  480112 type.go:168] "Request Body" body=""
	I1205 06:43:29.276276  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:29.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:29.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:29.776229  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:29.776584  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:30.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:43:30.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:30.276477  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:30.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:43:30.776218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:30.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:30.776585  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:31.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:43:31.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:31.276684  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:31.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:43:31.776149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:31.776434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:32.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:43:32.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:32.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:32.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:43:32.776375  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:32.776708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:32.776765  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:43:33.276143  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:33.276404  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:33.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:43:33.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:33.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:34.276299  480112 type.go:168] "Request Body" body=""
	I1205 06:43:34.276386  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:34.276745  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:34.776318  480112 type.go:168] "Request Body" body=""
	I1205 06:43:34.776389  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:34.776645  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:35.276158  480112 type.go:168] "Request Body" body=""
	I1205 06:43:35.276233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:35.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:35.276620  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:35.776302  480112 type.go:168] "Request Body" body=""
	I1205 06:43:35.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:35.776730  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:36.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:43:36.276228  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:36.276513  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:36.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:43:36.776244  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:36.776582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:37.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:43:37.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:37.276568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:37.776217  480112 type.go:168] "Request Body" body=""
	I1205 06:43:37.776283  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:37.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:37.776588  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:38.276170  480112 type.go:168] "Request Body" body=""
	I1205 06:43:38.276253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:38.276591  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:38.776284  480112 type.go:168] "Request Body" body=""
	I1205 06:43:38.776366  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:38.776702  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:39.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:39.276158  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:39.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:39.776295  480112 type.go:168] "Request Body" body=""
	I1205 06:43:39.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:39.776693  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:39.776750  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:40.276217  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:40.276537  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:40.776079  480112 type.go:168] "Request Body" body=""
	I1205 06:43:40.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:40.776460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:41.276147  480112 type.go:168] "Request Body" body=""
	I1205 06:43:41.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:41.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:41.776268  480112 type.go:168] "Request Body" body=""
	I1205 06:43:41.776350  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:41.776641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:42.276112  480112 type.go:168] "Request Body" body=""
	I1205 06:43:42.276194  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:42.276467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:42.276522  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:42.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:43:42.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:42.776576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:43.276319  480112 type.go:168] "Request Body" body=""
	I1205 06:43:43.276422  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:43.276770  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:43.776459  480112 type.go:168] "Request Body" body=""
	I1205 06:43:43.776529  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:43.776862  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:44.276624  480112 type.go:168] "Request Body" body=""
	I1205 06:43:44.276703  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:44.277019  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:44.277073  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:44.776885  480112 type.go:168] "Request Body" body=""
	I1205 06:43:44.776964  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:44.777314  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:45.276046  480112 type.go:168] "Request Body" body=""
	I1205 06:43:45.276131  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:45.276394  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:45.776391  480112 type.go:168] "Request Body" body=""
	I1205 06:43:45.776465  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:45.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:46.276420  480112 type.go:168] "Request Body" body=""
	I1205 06:43:46.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:46.276883  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:46.776662  480112 type.go:168] "Request Body" body=""
	I1205 06:43:46.776730  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:46.776998  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:46.777043  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:47.276766  480112 type.go:168] "Request Body" body=""
	I1205 06:43:47.276837  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:47.277173  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:47.776961  480112 type.go:168] "Request Body" body=""
	I1205 06:43:47.777038  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:47.777378  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:48.277033  480112 type.go:168] "Request Body" body=""
	I1205 06:43:48.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:48.277382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:48.776065  480112 type.go:168] "Request Body" body=""
	I1205 06:43:48.776137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:48.776471  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:49.276093  480112 type.go:168] "Request Body" body=""
	I1205 06:43:49.276177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:49.276505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:49.276562  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:49.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:43:49.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:49.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:50.276235  480112 type.go:168] "Request Body" body=""
	I1205 06:43:50.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:50.276637  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:50.776106  480112 type.go:168] "Request Body" body=""
	I1205 06:43:50.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:50.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:51.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:43:51.276152  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:51.276423  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:51.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:51.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:51.776605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:52.276271  480112 type.go:168] "Request Body" body=""
	I1205 06:43:52.276356  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:52.276672  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:52.776325  480112 type.go:168] "Request Body" body=""
	I1205 06:43:52.776416  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:52.776729  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:53.276142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:53.276218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:53.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:53.776158  480112 type.go:168] "Request Body" body=""
	I1205 06:43:53.776239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:53.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:54.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:43:54.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:54.276616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:54.276664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:54.776326  480112 type.go:168] "Request Body" body=""
	I1205 06:43:54.776403  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:54.776723  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:55.276353  480112 type.go:168] "Request Body" body=""
	I1205 06:43:55.276436  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:55.276747  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:55.776688  480112 type.go:168] "Request Body" body=""
	I1205 06:43:55.776759  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:55.777015  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:56.276831  480112 type.go:168] "Request Body" body=""
	I1205 06:43:56.276902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:56.277216  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:56.277268  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:56.776881  480112 type.go:168] "Request Body" body=""
	I1205 06:43:56.776955  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:56.777297  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:57.276004  480112 type.go:168] "Request Body" body=""
	I1205 06:43:57.276075  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:57.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:57.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:43:57.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:57.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:58.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:43:58.276309  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:58.276651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:58.776346  480112 type.go:168] "Request Body" body=""
	I1205 06:43:58.776416  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:58.776677  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:58.776715  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:59.276125  480112 type.go:168] "Request Body" body=""
	I1205 06:43:59.276197  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:59.276494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:59.776448  480112 type.go:168] "Request Body" body=""
	I1205 06:43:59.776524  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:59.776869  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:00.276462  480112 type.go:168] "Request Body" body=""
	I1205 06:44:00.276555  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:00.276854  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:00.776547  480112 type.go:168] "Request Body" body=""
	I1205 06:44:00.776618  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:00.776940  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:00.776993  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:01.276491  480112 type.go:168] "Request Body" body=""
	I1205 06:44:01.276575  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:01.276927  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:01.776492  480112 type.go:168] "Request Body" body=""
	I1205 06:44:01.776564  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:01.776833  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:02.276163  480112 type.go:168] "Request Body" body=""
	I1205 06:44:02.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:02.276584  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:02.776165  480112 type.go:168] "Request Body" body=""
	I1205 06:44:02.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:02.776570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:03.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:44:03.276153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:03.276417  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:03.276467  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:03.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:44:03.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:03.776577  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:04.276109  480112 type.go:168] "Request Body" body=""
	I1205 06:44:04.276190  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:04.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:04.776096  480112 type.go:168] "Request Body" body=""
	I1205 06:44:04.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:04.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:05.276414  480112 type.go:168] "Request Body" body=""
	I1205 06:44:05.276498  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:05.276881  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:05.276923  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:05.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:44:05.776194  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:05.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:06.276351  480112 type.go:168] "Request Body" body=""
	I1205 06:44:06.276427  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:06.276786  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:06.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:44:06.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:06.776566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:07.276285  480112 type.go:168] "Request Body" body=""
	I1205 06:44:07.276368  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:07.276703  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:07.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:44:07.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:07.776460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:07.776509  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:08.276197  480112 type.go:168] "Request Body" body=""
	I1205 06:44:08.276276  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:08.276613  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:08.776318  480112 type.go:168] "Request Body" body=""
	I1205 06:44:08.776428  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:08.776751  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:09.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:44:09.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:09.276440  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:09.776197  480112 type.go:168] "Request Body" body=""
	I1205 06:44:09.776287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:09.776623  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:09.776680  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:10.276196  480112 type.go:168] "Request Body" body=""
	I1205 06:44:10.276274  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:10.276577  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:10.776259  480112 type.go:168] "Request Body" body=""
	I1205 06:44:10.776330  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:10.776668  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:11.276138  480112 type.go:168] "Request Body" body=""
	I1205 06:44:11.276219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:11.276564  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:11.776276  480112 type.go:168] "Request Body" body=""
	I1205 06:44:11.776353  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:11.776679  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:11.776729  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:12.276203  480112 type.go:168] "Request Body" body=""
	I1205 06:44:12.276275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:12.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:12.776112  480112 type.go:168] "Request Body" body=""
	I1205 06:44:12.776183  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:12.776496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:13.276219  480112 type.go:168] "Request Body" body=""
	I1205 06:44:13.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:13.276630  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:13.776971  480112 type.go:168] "Request Body" body=""
	I1205 06:44:13.777044  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:13.777316  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:13.777359  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:14.276035  480112 type.go:168] "Request Body" body=""
	I1205 06:44:14.276110  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:14.276443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:14.776134  480112 type.go:168] "Request Body" body=""
	I1205 06:44:14.776211  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:14.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:15.276120  480112 type.go:168] "Request Body" body=""
	I1205 06:44:15.276190  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:15.276456  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:15.776155  480112 type.go:168] "Request Body" body=""
	I1205 06:44:15.776242  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:15.776630  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:16.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:44:16.276312  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:16.276651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:16.276712  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:16.776087  480112 type.go:168] "Request Body" body=""
	I1205 06:44:16.776158  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:16.776479  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:17.276169  480112 type.go:168] "Request Body" body=""
	I1205 06:44:17.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:17.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:17.776291  480112 type.go:168] "Request Body" body=""
	I1205 06:44:17.776366  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:17.776701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:18.276007  480112 type.go:168] "Request Body" body=""
	I1205 06:44:18.276073  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:18.276319  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:18.776002  480112 type.go:168] "Request Body" body=""
	I1205 06:44:18.776084  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:18.776459  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:18.776517  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:19.276181  480112 type.go:168] "Request Body" body=""
	I1205 06:44:19.276257  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:19.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:19.776049  480112 type.go:168] "Request Body" body=""
	I1205 06:44:19.776119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:19.776371  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:20.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:44:20.276146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:20.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:20.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:44:20.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:20.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:20.776581  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:21.276106  480112 type.go:168] "Request Body" body=""
	I1205 06:44:21.276174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:21.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:21.776202  480112 type.go:168] "Request Body" body=""
	I1205 06:44:21.776283  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:21.776659  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:22.276365  480112 type.go:168] "Request Body" body=""
	I1205 06:44:22.276438  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:22.276776  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:22.776225  480112 type.go:168] "Request Body" body=""
	I1205 06:44:22.776394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:22.776811  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:22.776918  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:23.276742  480112 type.go:168] "Request Body" body=""
	I1205 06:44:23.276818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:23.277175  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:23.777069  480112 type.go:168] "Request Body" body=""
	I1205 06:44:23.777161  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:23.777559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:24.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:24.276175  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:24.276441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:24.776173  480112 type.go:168] "Request Body" body=""
	I1205 06:44:24.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:24.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:25.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:25.276345  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:25.276694  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:25.276749  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:25.776415  480112 type.go:168] "Request Body" body=""
	I1205 06:44:25.776487  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:25.776789  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:26.276167  480112 type.go:168] "Request Body" body=""
	I1205 06:44:26.276248  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:26.276568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:26.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:44:26.776206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:26.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:27.276125  480112 type.go:168] "Request Body" body=""
	I1205 06:44:27.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:27.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:27.776128  480112 type.go:168] "Request Body" body=""
	I1205 06:44:27.776214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:27.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:27.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical 500 ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-787602 continue from 06:44:28 through 06:45:28, every response empty; node_ready.go:55 repeats the same "connection refused" (will retry) warning roughly every 2 s, last at 06:45:27.776737 ...]
	I1205 06:45:28.776331  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.776414  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.776707  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.276416  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.276493  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.276818  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.776639  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.776714  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.776980  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:29.777029  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:30.276781  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.276856  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.277201  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:30.776871  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.776952  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.777288  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.277017  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.277360  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.776747  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.776819  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.777132  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:31.777186  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:32.276950  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.277023  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.277345  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:32.776087  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.776177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.776473  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.276576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.776178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.776686  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:34.276385  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.276462  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.276731  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:34.276780  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:34.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.776596  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.776911  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.276784  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.276862  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.277181  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.776969  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.777301  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:36.277066  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.277146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.277501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:36.277569  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:36.776101  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.776185  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.276163  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.776112  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.276202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.276516  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:38.776483  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:39.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.276555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:39.776407  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.776488  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.776826  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.276588  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.276663  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.276937  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.776794  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.776875  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.777212  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:40.777264  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:41.277035  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.277114  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.277443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:41.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.776176  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.276209  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.276666  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.776161  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.776562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:43.276203  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.276275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.276590  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:43.276647  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:43.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.276530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.776148  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:45.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.276708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:45.276763  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:45.776444  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.776519  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.776847  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.276602  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.276676  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.276921  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.776673  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.776790  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.777114  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:47.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:47.277302  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:47.776975  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.777051  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.777338  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.276041  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.276118  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.276410  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.776030  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.776109  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.776395  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.276033  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.276104  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.276393  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:49.776593  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:50.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.276494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:50.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.776209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:52.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.276243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.276510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:52.276549  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:52.776182  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.776256  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.776572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.776498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:54.276193  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.276592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:54.276649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:54.776395  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.776470  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.276132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.276389  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.776213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.776545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:56.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.276656  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:56.276710  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:56.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.776281  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.776602  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.276123  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.276534  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.776290  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.776381  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.776755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.276054  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.276133  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.276434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:58.776554  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:59.276223  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:59.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.276689  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.276784  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.277182  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.776974  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.777053  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.777397  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:00.777455  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:01.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.276181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:01.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.276247  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.276641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:03.276061  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.276138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:03.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:03.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.276265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.776433  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.776505  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.776849  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:05.276666  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.276770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:05.277147  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:05.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:06.276074  480112 node_ready.go:38] duration metric: took 6m0.000169865s for node "functional-787602" to be "Ready" ...
	I1205 06:46:06.279558  480112 out.go:203] 
	W1205 06:46:06.282535  480112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:46:06.282557  480112 out.go:285] * 
	W1205 06:46:06.284719  480112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:46:06.287525  480112 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-787602 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m7.39657807s for "functional-787602" cluster.
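The condensed loop above is a standard poll-until-deadline pattern: the client GETs the node object every 500ms, treats "connection refused" as a transient error, and only gives up when the 6-minute context deadline expires, which then surfaces as the GUEST_START exit. A minimal sketch of an equivalent readiness wait, assuming client-go (the kubeconfig path is illustrative, and this is not minikube's actual node_ready.go code):

```go
// Sketch: wait for a node's Ready condition with a 500ms poll and 6m deadline,
// mirroring the retry behavior visible in the log above. Assumes client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path, not taken from the test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, like the loop in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-787602", metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. "connection refused" while the
				// apiserver restarts) are swallowed so the poll retries.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// With the apiserver down for the whole window, this is the
		// "context deadline exceeded" seen in the test output.
		fmt.Println("node never became Ready:", err)
	}
}
```

Returning `(false, nil)` on a failed GET is the design choice that makes the loop retry silently on every refused dial; only the deadline, not the repeated errors, ends the wait.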
I1205 06:46:06.877776  444147 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
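Two fields in the inspect blob carry most of the diagnosis: the container state is still "running", and apiserver port 8441/tcp is published at 127.0.0.1:33151, so the kic container survived while nothing inside it answered on 8441. A short sketch of pulling just those fields with `docker inspect --format` from Go, instead of scanning the full JSON by eye (the `--format` template is Docker's standard Go-template syntax):

```go
// Sketch: extract container status and the 8441/tcp host port via os/exec.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent CLI:
	//   docker inspect --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-787602
	out, err := exec.Command("docker", "inspect", "--format",
		`{{.State.Status}} {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-787602").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "running 33151"
}
```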
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (330.452461ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
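The `--format={{.Host}}` flag used above is a Go text/template evaluated against minikube's status struct, which is why stdout can read "Running" while the exit code (2) still flags unhealthy components. A small illustration of that mechanism, with a hypothetical Status struct standing in for minikube's real one:

```go
// Sketch: how a --format flag like {{.Host}} is typically rendered.
// The Status type here is illustrative, not minikube's actual type.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// "Running" matches the stdout above even though the command exited 2:
	// the container is up while the apiserver is not.
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}
```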
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 logs -n 25: (1.012932703s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-252233 image save kicbase/echo-server:functional-252233 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image rm kicbase/echo-server:functional-252233 --alsologtostderr                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/test/nested/copy/444147/hosts                                                                                         │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image save --daemon kicbase/echo-server:functional-252233 --alsologtostderr                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/444147.pem                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /usr/share/ca-certificates/444147.pem                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/4441472.pem                                                                                                 │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /usr/share/ca-certificates/4441472.pem                                                                                     │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format short --alsologtostderr                                                                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh pgrep buildkitd                                                                                                                     │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image          │ functional-252233 image ls --format yaml --alsologtostderr                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                                    │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format json --alsologtostderr                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format table --alsologtostderr                                                                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete         │ -p functional-252233                                                                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start          │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ start          │ -p functional-787602 --alsologtostderr -v=8                                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:39 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:39:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:39:59.523609  480112 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:39:59.523793  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.523816  480112 out.go:374] Setting ErrFile to fd 2...
	I1205 06:39:59.523837  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.524220  480112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:39:59.524681  480112 out.go:368] Setting JSON to false
	I1205 06:39:59.525943  480112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12127,"bootTime":1764904673,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:39:59.526021  480112 start.go:143] virtualization:  
	I1205 06:39:59.529485  480112 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:39:59.533299  480112 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:39:59.533430  480112 notify.go:221] Checking for updates...
	I1205 06:39:59.539032  480112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:39:59.542038  480112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:39:59.544821  480112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:39:59.547558  480112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:39:59.550303  480112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:39:59.553653  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:39:59.553793  480112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:39:59.587101  480112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:39:59.587209  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.647016  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.637315829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.647121  480112 docker.go:319] overlay module found
	I1205 06:39:59.650323  480112 out.go:179] * Using the docker driver based on existing profile
	I1205 06:39:59.653400  480112 start.go:309] selected driver: docker
	I1205 06:39:59.653426  480112 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.653516  480112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:39:59.653622  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.713012  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.702941112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.713548  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:39:59.713621  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:39:59.713678  480112 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.716888  480112 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:39:59.719675  480112 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:39:59.722682  480112 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:39:59.725781  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:39:59.725946  480112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:39:59.745247  480112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:39:59.745269  480112 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:39:59.798316  480112 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:40:00.046313  480112 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
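Both preload mirrors return 404 for the v1.35.0-beta.0 tarball (the GCS bucket first, then the GitHub release), and the run falls back to the per-image cache visible in the cache.go lines that follow. A rough sketch of that probe-and-fall-back shape, assuming nothing about minikube's actual implementation:

    // Sketch (not minikube's code): try each preload mirror in order,
    // falling back when none serves the tarball.
    package main

    import (
    	"fmt"
    	"net/http"
    )

    func findPreload(urls []string) (string, bool) {
    	for _, u := range urls {
    		resp, err := http.Head(u)
    		if err != nil {
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return u, true
    		}
    		fmt.Printf("preload %s status code: %d\n", u, resp.StatusCode) // e.g. 404, as above
    	}
    	return "", false
    }

    func main() {
    	mirrors := []string{
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/...",
    		"https://github.com/kubernetes-sigs/minikube-preloads/releases/...",
    	}
    	if _, ok := findPreload(mirrors); !ok {
    		fmt.Println("no preload found; loading cached images individually")
    	}
    }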
	I1205 06:40:00.046504  480112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:40:00.046814  480112 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:40:00.046857  480112 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.046933  480112 start.go:364] duration metric: took 43.709µs to acquireMachinesLock for "functional-787602"
	I1205 06:40:00.046950  480112 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:40:00.046969  480112 fix.go:54] fixHost starting: 
	I1205 06:40:00.047287  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:00.049366  480112 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049539  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:40:00.049567  480112 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 224.085µs
	I1205 06:40:00.049597  480112 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:40:00.049636  480112 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049702  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:40:00.049722  480112 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 89.733µs
	I1205 06:40:00.050277  480112 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:40:00.050353  480112 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051458  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:40:00.051500  480112 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.155091ms
	I1205 06:40:00.051529  480112 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051582  480112 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051659  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:40:00.051680  480112 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 114.34µs
	I1205 06:40:00.051702  480112 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051741  480112 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051791  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:40:00.051822  480112 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 83.054µs
	I1205 06:40:00.063751  480112 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064371  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:40:00.064388  480112 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 658.756µs
	I1205 06:40:00.064400  480112 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:40:00.064453  480112 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064504  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:40:00.064510  480112 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 92.349µs
	I1205 06:40:00.064516  480112 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:40:00.064532  480112 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.067976  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:40:00.068029  480112 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.495239ms
	I1205 06:40:00.068074  480112 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:40:00.058631  480112 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:40:00.068155  480112 cache.go:87] Successfully saved all images to host disk.
	I1205 06:40:00.156134  480112 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:40:00.156177  480112 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:40:00.160840  480112 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:40:00.160889  480112 machine.go:94] provisionDockerMachine start ...
	I1205 06:40:00.161003  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.232523  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.232876  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.232886  480112 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:40:00.484459  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.484485  480112 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:40:00.484571  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.540991  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.541328  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.541341  480112 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:40:00.761314  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.761404  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.782315  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.782666  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.782689  480112 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:40:00.934901  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:40:00.934930  480112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:40:00.935005  480112 ubuntu.go:190] setting up certificates
	I1205 06:40:00.935016  480112 provision.go:84] configureAuth start
	I1205 06:40:00.935097  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:00.952439  480112 provision.go:143] copyHostCerts
	I1205 06:40:00.952486  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952527  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:40:00.952543  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952619  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:40:00.952705  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952727  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:40:00.952737  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952765  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:40:00.952809  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952828  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:40:00.952837  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952861  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:40:00.952911  480112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
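The configureAuth step issues a server certificate whose SANs cover the logged list (127.0.0.1, 192.168.49.2, functional-787602, localhost, minikube), signed against the machine CA. For illustration only, a self-signed variant with the same SAN set via Go's crypto/x509; key size, serial, and key-usage choices are placeholders, and the real flow signs with the minikube CA rather than self-signing:

    // Illustration only: a server cert carrying the SAN list logged above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1), // placeholder serial
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-787602"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:     []string{"functional-787602", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here for brevity (template doubles as parent).
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued server cert: %d DER bytes\n", len(der))
    }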
	I1205 06:40:01.160028  480112 provision.go:177] copyRemoteCerts
	I1205 06:40:01.160150  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:40:01.160201  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.184354  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
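The docker container inspect -f template repeated throughout this log digs the published SSH port (33148) out of NetworkSettings.Ports with two nested index calls. A self-contained sketch of the same template over hand-built data shaped like the inspect JSON at the top of this output:

    // Sketch of the inspect template: index steps into the Ports map,
    // then into its first binding, before the .HostPort field access.
    package main

    import (
    	"os"
    	"text/template"
    )

    type binding struct{ HostIp, HostPort string }

    type container struct {
    	NetworkSettings struct{ Ports map[string][]binding }
    }

    func main() {
    	var c container
    	c.NetworkSettings.Ports = map[string][]binding{
    		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33148"}},
    	}
    	t := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	t.Execute(os.Stdout, c) // prints: 33148
    }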
	I1205 06:40:01.295740  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:40:01.295812  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:40:01.316925  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:40:01.316986  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:40:01.339507  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:40:01.339574  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:40:01.358710  480112 provision.go:87] duration metric: took 423.67042ms to configureAuth
	I1205 06:40:01.358788  480112 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:40:01.358981  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:01.359104  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.377010  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:01.377340  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:01.377360  480112 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:40:01.723262  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:40:01.723303  480112 machine.go:97] duration metric: took 1.56238873s to provisionDockerMachine
	I1205 06:40:01.723316  480112 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:40:01.723329  480112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:40:01.723398  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:40:01.723446  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.742177  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.847102  480112 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:40:01.850854  480112 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:40:01.850880  480112 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:40:01.850885  480112 command_runner.go:130] > VERSION_ID="12"
	I1205 06:40:01.850889  480112 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:40:01.850897  480112 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:40:01.850901  480112 command_runner.go:130] > ID=debian
	I1205 06:40:01.850906  480112 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:40:01.850910  480112 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:40:01.850918  480112 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:40:01.850955  480112 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:40:01.850978  480112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:40:01.850990  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:40:01.851049  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:40:01.851138  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:40:01.851149  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /etc/ssl/certs/4441472.pem
	I1205 06:40:01.851230  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:40:01.851237  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> /etc/test/nested/copy/444147/hosts
	I1205 06:40:01.851282  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:40:01.859516  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:01.879483  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:40:01.898655  480112 start.go:296] duration metric: took 175.324245ms for postStartSetup
	I1205 06:40:01.898744  480112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:40:01.898799  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.917838  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.020238  480112 command_runner.go:130] > 18%
	I1205 06:40:02.020354  480112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:40:02.025815  480112 command_runner.go:130] > 160G
	I1205 06:40:02.026493  480112 fix.go:56] duration metric: took 1.979519007s for fixHost
	I1205 06:40:02.026516  480112 start.go:83] releasing machines lock for "functional-787602", held for 1.979574696s
	I1205 06:40:02.026587  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:02.046979  480112 ssh_runner.go:195] Run: cat /version.json
	I1205 06:40:02.047030  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.047280  480112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:40:02.047345  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.081102  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.085747  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.189932  480112 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:40:02.190072  480112 ssh_runner.go:195] Run: systemctl --version
	I1205 06:40:02.280062  480112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:40:02.282950  480112 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:40:02.282989  480112 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:40:02.283061  480112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:40:02.319896  480112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:40:02.324212  480112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:40:02.324374  480112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:40:02.324444  480112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:40:02.332670  480112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:40:02.332736  480112 start.go:496] detecting cgroup driver to use...
	I1205 06:40:02.332774  480112 detect.go:187] detected "cgroupfs" cgroup driver on host os
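detect.go reports the "cgroupfs" driver here, consistent with the CgroupDriver:cgroupfs field in the docker info dumps earlier in this log. One plausible detection route, shown purely as an assumption about how such a check could work rather than as minikube's actual detect.go, is to ask the Docker CLI directly:

    // Assumption, not minikube's detect.go: read the cgroup driver
    // from `docker info`, defaulting to cgroupfs on any failure.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func detectCgroupDriver() string {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		return "cgroupfs" // conservative default
    	}
    	if d := strings.TrimSpace(string(out)); d != "" {
    		return d
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
    }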
	I1205 06:40:02.332831  480112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:40:02.348502  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:40:02.361851  480112 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:40:02.361926  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:40:02.380602  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:40:02.393710  480112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:40:02.522109  480112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:40:02.655884  480112 docker.go:234] disabling docker service ...
	I1205 06:40:02.655958  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:40:02.673330  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:40:02.687649  480112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:40:02.802223  480112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:40:02.930343  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:40:02.944017  480112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:40:02.956898  480112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 06:40:02.958122  480112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:40:02.958248  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.967567  480112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:40:02.967712  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.976781  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.985897  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.994984  480112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:40:03.003975  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.013874  480112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.022919  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.032163  480112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:40:03.038816  480112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:40:03.039990  480112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:40:03.049427  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.175291  480112 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:40:03.341374  480112 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:40:03.341477  480112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:40:03.345425  480112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 06:40:03.345448  480112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:40:03.345464  480112 command_runner.go:130] > Device: 0,73	Inode: 1755        Links: 1
	I1205 06:40:03.345472  480112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:03.345477  480112 command_runner.go:130] > Access: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345484  480112 command_runner.go:130] > Modify: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345489  480112 command_runner.go:130] > Change: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345493  480112 command_runner.go:130] >  Birth: -
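After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear; the stat output above shows the socket came up within a fraction of a second. A minimal sketch of that wait loop (the polling interval and error text are assumptions, not minikube's exact values):

    // Sketch of "Will wait 60s for socket path": poll until the CRI-O
    // socket exists or the deadline passes.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed interval
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is up")
    }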
	I1205 06:40:03.345525  480112 start.go:564] Will wait 60s for crictl version
	I1205 06:40:03.345579  480112 ssh_runner.go:195] Run: which crictl
	I1205 06:40:03.348931  480112 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:40:03.349401  480112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:40:03.373825  480112 command_runner.go:130] > Version:  0.1.0
	I1205 06:40:03.373849  480112 command_runner.go:130] > RuntimeName:  cri-o
	I1205 06:40:03.373973  480112 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1205 06:40:03.374159  480112 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:40:03.376168  480112 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:40:03.376252  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.403613  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.403690  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.403710  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.403727  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.403756  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.403777  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.403795  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.403813  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.403844  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.403865  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.403879  480112 command_runner.go:130] >      static
	I1205 06:40:03.403895  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.403924  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.403945  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.403964  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.403979  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.404006  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.404027  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.404044  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.404059  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.406234  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.432776  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.432811  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.432836  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.432843  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.432849  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.432862  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.432872  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.432877  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.432886  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.432908  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.432916  480112 command_runner.go:130] >      static
	I1205 06:40:03.432920  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.432948  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.432956  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.432959  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.432963  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.432970  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.432998  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.433006  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.433010  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.440242  480112 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:40:03.443151  480112 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:40:03.459691  480112 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:40:03.463610  480112 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:40:03.463748  480112 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:40:03.463853  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:40:03.463910  480112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:40:03.497207  480112 command_runner.go:130] > {
	I1205 06:40:03.497226  480112 command_runner.go:130] >   "images":  [
	I1205 06:40:03.497231  480112 command_runner.go:130] >     {
	I1205 06:40:03.497239  480112 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:40:03.497244  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497250  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:40:03.497253  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497257  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497267  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1205 06:40:03.497271  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497276  480112 command_runner.go:130] >       "size":  "29035622",
	I1205 06:40:03.497279  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497283  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497286  480112 command_runner.go:130] >     },
	I1205 06:40:03.497290  480112 command_runner.go:130] >     {
	I1205 06:40:03.497297  480112 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:40:03.497301  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497306  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:40:03.497309  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497313  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497321  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1205 06:40:03.497324  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497328  480112 command_runner.go:130] >       "size":  "74488375",
	I1205 06:40:03.497332  480112 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:40:03.497336  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497340  480112 command_runner.go:130] >     },
	I1205 06:40:03.497343  480112 command_runner.go:130] >     {
	I1205 06:40:03.497350  480112 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:40:03.497354  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497359  480112 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:40:03.497362  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497366  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497388  480112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1205 06:40:03.497393  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497397  480112 command_runner.go:130] >       "size":  "60854229",
	I1205 06:40:03.497401  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497405  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497409  480112 command_runner.go:130] >       },
	I1205 06:40:03.497413  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497417  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497421  480112 command_runner.go:130] >     },
	I1205 06:40:03.497424  480112 command_runner.go:130] >     {
	I1205 06:40:03.497430  480112 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:40:03.497434  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497439  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:40:03.497442  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497446  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497454  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1205 06:40:03.497459  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497463  480112 command_runner.go:130] >       "size":  "84947242",
	I1205 06:40:03.497466  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497469  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497473  480112 command_runner.go:130] >       },
	I1205 06:40:03.497476  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497480  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497483  480112 command_runner.go:130] >     },
	I1205 06:40:03.497486  480112 command_runner.go:130] >     {
	I1205 06:40:03.497492  480112 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:40:03.497496  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497501  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:40:03.497505  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497509  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497517  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1205 06:40:03.497520  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497529  480112 command_runner.go:130] >       "size":  "72167568",
	I1205 06:40:03.497539  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497542  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497545  480112 command_runner.go:130] >       },
	I1205 06:40:03.497549  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497552  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497555  480112 command_runner.go:130] >     },
	I1205 06:40:03.497558  480112 command_runner.go:130] >     {
	I1205 06:40:03.497564  480112 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:40:03.497568  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497573  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:40:03.497575  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497579  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497588  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1205 06:40:03.497592  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497595  480112 command_runner.go:130] >       "size":  "74105124",
	I1205 06:40:03.497599  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497603  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497606  480112 command_runner.go:130] >     },
	I1205 06:40:03.497609  480112 command_runner.go:130] >     {
	I1205 06:40:03.497615  480112 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:40:03.497618  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497624  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:40:03.497627  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497630  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497638  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1205 06:40:03.497641  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497645  480112 command_runner.go:130] >       "size":  "49819792",
	I1205 06:40:03.497648  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497652  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497655  480112 command_runner.go:130] >       },
	I1205 06:40:03.497659  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497663  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497666  480112 command_runner.go:130] >     },
	I1205 06:40:03.497672  480112 command_runner.go:130] >     {
	I1205 06:40:03.497679  480112 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:40:03.497683  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497687  480112 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.497690  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497694  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497701  480112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1205 06:40:03.497705  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497708  480112 command_runner.go:130] >       "size":  "517328",
	I1205 06:40:03.497712  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497715  480112 command_runner.go:130] >         "value":  "65535"
	I1205 06:40:03.497718  480112 command_runner.go:130] >       },
	I1205 06:40:03.497722  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497726  480112 command_runner.go:130] >       "pinned":  true
	I1205 06:40:03.497729  480112 command_runner.go:130] >     }
	I1205 06:40:03.497732  480112 command_runner.go:130] >   ]
	I1205 06:40:03.497735  480112 command_runner.go:130] > }
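The preload check that follows parses exactly this 'crictl images --output json' dump to conclude that all images are already present. A minimal sketch of inspecting the same data by hand, assuming 'jq' is installed on the node (it is not part of the log above):

	# List every repo tag CRI-O knows about, one per line.
	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# Exit 0 and print "present" only if the pinned pause image is in the store.
	sudo crictl images --output json \
	  | jq -e '.images[] | select(.repoTags[] == "registry.k8s.io/pause:3.10.1")' >/dev/null && echo present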
	I1205 06:40:03.499390  480112 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:40:03.499408  480112 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:40:03.499417  480112 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:40:03.499515  480112 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
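The unit override above is what minikube renders for the kubelet. A minimal sketch of applying such an override by hand, assuming a kubeadm-style drop-in path (an assumption; the path minikube actually writes may differ, and the flags are abbreviated from the log above):

	# Hypothetical drop-in path; kubeadm-style installs use a 10-kubeadm.conf drop-in.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
	systemctl cat kubelet    # verify the rendered unit, drop-ins included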
	I1205 06:40:03.499587  480112 ssh_runner.go:195] Run: crio config
	I1205 06:40:03.548638  480112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 06:40:03.548661  480112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 06:40:03.548669  480112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 06:40:03.548671  480112 command_runner.go:130] > #
	I1205 06:40:03.548686  480112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 06:40:03.548693  480112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 06:40:03.548700  480112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 06:40:03.548716  480112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 06:40:03.548720  480112 command_runner.go:130] > # reload'.
	I1205 06:40:03.548726  480112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 06:40:03.548733  480112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 06:40:03.548739  480112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 06:40:03.548745  480112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 06:40:03.548748  480112 command_runner.go:130] > [crio]
	I1205 06:40:03.548755  480112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 06:40:03.548760  480112 command_runner.go:130] > # container images, in this directory.
	I1205 06:40:03.549179  480112 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 06:40:03.549226  480112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 06:40:03.549246  480112 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1205 06:40:03.549268  480112 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1205 06:40:03.549287  480112 command_runner.go:130] > # imagestore = ""
	I1205 06:40:03.549306  480112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 06:40:03.549324  480112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 06:40:03.549341  480112 command_runner.go:130] > # storage_driver = "overlay"
	I1205 06:40:03.549356  480112 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1205 06:40:03.549385  480112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 06:40:03.549402  480112 command_runner.go:130] > # storage_option = [
	I1205 06:40:03.549417  480112 command_runner.go:130] > # ]
	I1205 06:40:03.549435  480112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 06:40:03.549461  480112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute path.
	I1205 06:40:03.549487  480112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 06:40:03.549504  480112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 06:40:03.549521  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 06:40:03.549545  480112 command_runner.go:130] > # always happen on a node reboot
	I1205 06:40:03.549737  480112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 06:40:03.549768  480112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 06:40:03.549775  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 06:40:03.549781  480112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 06:40:03.549785  480112 command_runner.go:130] > # version_file_persist = ""
	I1205 06:40:03.549793  480112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 06:40:03.549801  480112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 06:40:03.549805  480112 command_runner.go:130] > # internal_wipe = true
	I1205 06:40:03.549813  480112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 06:40:03.549818  480112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 06:40:03.549822  480112 command_runner.go:130] > # internal_repair = true
	I1205 06:40:03.549828  480112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 06:40:03.549834  480112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 06:40:03.549840  480112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 06:40:03.549845  480112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
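Everything in this [crio] block is commented out, i.e. the compiled-in defaults are in effect. CRI-O also merges drop-in files from /etc/crio/crio.conf.d/ over these defaults, so local overrides usually go there rather than into the main file. A minimal sketch (file name arbitrary; storage options are not live-reloadable, so a full restart is needed rather than a SIGHUP):

	sudo tee /etc/crio/crio.conf.d/10-storage.conf >/dev/null <<-'EOF'
	[crio]
	root = "/var/lib/containers/storage"
	EOF
	sudo systemctl restart crio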
	I1205 06:40:03.549854  480112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 06:40:03.549858  480112 command_runner.go:130] > [crio.api]
	I1205 06:40:03.549863  480112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 06:40:03.549867  480112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 06:40:03.549872  480112 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 06:40:03.549876  480112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 06:40:03.549883  480112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 06:40:03.549889  480112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 06:40:03.549892  480112 command_runner.go:130] > # stream_port = "0"
	I1205 06:40:03.549897  480112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 06:40:03.549901  480112 command_runner.go:130] > # stream_enable_tls = false
	I1205 06:40:03.549907  480112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 06:40:03.549911  480112 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 06:40:03.549917  480112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 06:40:03.549923  480112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549927  480112 command_runner.go:130] > # stream_tls_cert = ""
	I1205 06:40:03.549933  480112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 06:40:03.549939  480112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549942  480112 command_runner.go:130] > # stream_tls_key = ""
	I1205 06:40:03.549948  480112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 06:40:03.549954  480112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 06:40:03.549958  480112 command_runner.go:130] > # automatically pick up the changes.
	I1205 06:40:03.549962  480112 command_runner.go:130] > # stream_tls_ca = ""
	I1205 06:40:03.549979  480112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549984  480112 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 06:40:03.549991  480112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549996  480112 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
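Since stream_enable_tls defaults to false, kubelet exec/attach/port-forward streams run in the clear between the kubelet and CRI-O. A hedged sketch of turning TLS on via a drop-in (certificate paths are placeholders, not from the log):

	sudo tee /etc/crio/crio.conf.d/20-stream-tls.conf >/dev/null <<-'EOF'
	[crio.api]
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"    # placeholder path
	stream_tls_key = "/etc/crio/stream.key"     # placeholder path
	EOF
	sudo systemctl restart crio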
	I1205 06:40:03.550002  480112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 06:40:03.550007  480112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 06:40:03.550010  480112 command_runner.go:130] > [crio.runtime]
	I1205 06:40:03.550016  480112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 06:40:03.550021  480112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 06:40:03.550025  480112 command_runner.go:130] > # "nofile=1024:2048"
	I1205 06:40:03.550034  480112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 06:40:03.550038  480112 command_runner.go:130] > # default_ulimits = [
	I1205 06:40:03.550041  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550047  480112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 06:40:03.550050  480112 command_runner.go:130] > # no_pivot = false
	I1205 06:40:03.550056  480112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 06:40:03.550062  480112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 06:40:03.550067  480112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 06:40:03.550072  480112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 06:40:03.550077  480112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 06:40:03.550084  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550087  480112 command_runner.go:130] > # conmon = ""
	I1205 06:40:03.550092  480112 command_runner.go:130] > # Cgroup setting for conmon
	I1205 06:40:03.550099  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 06:40:03.550102  480112 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 06:40:03.550108  480112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 06:40:03.550115  480112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 06:40:03.550124  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550128  480112 command_runner.go:130] > # conmon_env = [
	I1205 06:40:03.550130  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550136  480112 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 06:40:03.550141  480112 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 06:40:03.550146  480112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 06:40:03.550150  480112 command_runner.go:130] > # default_env = [
	I1205 06:40:03.550152  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550158  480112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 06:40:03.550165  480112 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1205 06:40:03.550169  480112 command_runner.go:130] > # selinux = false
	I1205 06:40:03.550180  480112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 06:40:03.550188  480112 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1205 06:40:03.550193  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550197  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.550202  480112 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1205 06:40:03.550212  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550216  480112 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1205 06:40:03.550223  480112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 06:40:03.550229  480112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 06:40:03.550235  480112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 06:40:03.550241  480112 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 06:40:03.550246  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550250  480112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 06:40:03.550255  480112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 06:40:03.550259  480112 command_runner.go:130] > # the cgroup blockio controller.
	I1205 06:40:03.550263  480112 command_runner.go:130] > # blockio_config_file = ""
	I1205 06:40:03.550269  480112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 06:40:03.550273  480112 command_runner.go:130] > # blockio parameters.
	I1205 06:40:03.550277  480112 command_runner.go:130] > # blockio_reload = false
	I1205 06:40:03.550284  480112 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 06:40:03.550287  480112 command_runner.go:130] > # irqbalance daemon.
	I1205 06:40:03.550292  480112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 06:40:03.550298  480112 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1205 06:40:03.550305  480112 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 06:40:03.550313  480112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 06:40:03.550319  480112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 06:40:03.550325  480112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 06:40:03.550330  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550333  480112 command_runner.go:130] > # rdt_config_file = ""
	I1205 06:40:03.550338  480112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 06:40:03.550342  480112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 06:40:03.550348  480112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 06:40:03.550711  480112 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 06:40:03.550724  480112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 06:40:03.550731  480112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 06:40:03.550734  480112 command_runner.go:130] > # will be added.
	I1205 06:40:03.550738  480112 command_runner.go:130] > # default_capabilities = [
	I1205 06:40:03.550742  480112 command_runner.go:130] > # 	"CHOWN",
	I1205 06:40:03.550746  480112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 06:40:03.550749  480112 command_runner.go:130] > # 	"FSETID",
	I1205 06:40:03.550752  480112 command_runner.go:130] > # 	"FOWNER",
	I1205 06:40:03.550756  480112 command_runner.go:130] > # 	"SETGID",
	I1205 06:40:03.550759  480112 command_runner.go:130] > # 	"SETUID",
	I1205 06:40:03.550782  480112 command_runner.go:130] > # 	"SETPCAP",
	I1205 06:40:03.550786  480112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 06:40:03.550789  480112 command_runner.go:130] > # 	"KILL",
	I1205 06:40:03.550792  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550800  480112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 06:40:03.550810  480112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 06:40:03.550815  480112 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 06:40:03.550821  480112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 06:40:03.550827  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550831  480112 command_runner.go:130] > default_sysctls = [
	I1205 06:40:03.550835  480112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 06:40:03.550838  480112 command_runner.go:130] > ]
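That one uncommented sysctl is what lets unprivileged container processes bind ports below 1024 without NET_BIND_SERVICE. A quick way to confirm it from the node (the container ID is a placeholder):

	# Expect "0" from inside the container's network namespace.
	sudo crictl exec <container-id> cat /proc/sys/net/ipv4/ip_unprivileged_port_start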
	I1205 06:40:03.550842  480112 command_runner.go:130] > # List of devices on the host that a
	I1205 06:40:03.550849  480112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 06:40:03.550852  480112 command_runner.go:130] > # allowed_devices = [
	I1205 06:40:03.550856  480112 command_runner.go:130] > # 	"/dev/fuse",
	I1205 06:40:03.550859  480112 command_runner.go:130] > # 	"/dev/net/tun",
	I1205 06:40:03.550863  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550867  480112 command_runner.go:130] > # List of additional devices, specified as
	I1205 06:40:03.550875  480112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 06:40:03.550880  480112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 06:40:03.550886  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550889  480112 command_runner.go:130] > # additional_devices = [
	I1205 06:40:03.550894  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550899  480112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 06:40:03.550905  480112 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 06:40:03.550909  480112 command_runner.go:130] > # 	"/etc/cdi",
	I1205 06:40:03.550912  480112 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 06:40:03.550915  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550921  480112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 06:40:03.550927  480112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 06:40:03.550931  480112 command_runner.go:130] > # Defaults to false.
	I1205 06:40:03.550936  480112 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 06:40:03.550942  480112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 06:40:03.550949  480112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 06:40:03.550952  480112 command_runner.go:130] > # hooks_dir = [
	I1205 06:40:03.550956  480112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 06:40:03.550962  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550972  480112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 06:40:03.550979  480112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 06:40:03.550984  480112 command_runner.go:130] > # its default mounts from the following two files:
	I1205 06:40:03.550987  480112 command_runner.go:130] > #
	I1205 06:40:03.550993  480112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 06:40:03.550999  480112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 06:40:03.551004  480112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 06:40:03.551007  480112 command_runner.go:130] > #
	I1205 06:40:03.551013  480112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 06:40:03.551019  480112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 06:40:03.551025  480112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 06:40:03.551030  480112 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 06:40:03.551032  480112 command_runner.go:130] > #
	I1205 06:40:03.551036  480112 command_runner.go:130] > # default_mounts_file = ""
	I1205 06:40:03.551041  480112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 06:40:03.551047  480112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 06:40:03.551051  480112 command_runner.go:130] > # pids_limit = -1
	I1205 06:40:03.551057  480112 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 06:40:03.551063  480112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 06:40:03.551069  480112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 06:40:03.551077  480112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 06:40:03.551080  480112 command_runner.go:130] > # log_size_max = -1
	I1205 06:40:03.551087  480112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 06:40:03.551091  480112 command_runner.go:130] > # log_to_journald = false
	I1205 06:40:03.551098  480112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 06:40:03.551103  480112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 06:40:03.551108  480112 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 06:40:03.551113  480112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 06:40:03.551118  480112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 06:40:03.551121  480112 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 06:40:03.551127  480112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 06:40:03.551131  480112 command_runner.go:130] > # read_only = false
	I1205 06:40:03.551137  480112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 06:40:03.551147  480112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 06:40:03.551151  480112 command_runner.go:130] > # live configuration reload.
	I1205 06:40:03.551154  480112 command_runner.go:130] > # log_level = "info"
	I1205 06:40:03.551160  480112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 06:40:03.551164  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.551168  480112 command_runner.go:130] > # log_filter = ""
	I1205 06:40:03.551174  480112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551180  480112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 06:40:03.551184  480112 command_runner.go:130] > # separated by commas.
	I1205 06:40:03.551192  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551196  480112 command_runner.go:130] > # uid_mappings = ""
	I1205 06:40:03.551201  480112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551208  480112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 06:40:03.551212  480112 command_runner.go:130] > # separated by commas.
	I1205 06:40:03.551219  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551223  480112 command_runner.go:130] > # gid_mappings = ""
	I1205 06:40:03.551229  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 06:40:03.551235  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551241  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551249  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551253  480112 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 06:40:03.551259  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 06:40:03.551264  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551271  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551278  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551282  480112 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 06:40:03.551288  480112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 06:40:03.551296  480112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 06:40:03.551302  480112 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 06:40:03.551306  480112 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 06:40:03.551311  480112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 06:40:03.551317  480112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 06:40:03.551322  480112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 06:40:03.551330  480112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 06:40:03.551333  480112 command_runner.go:130] > # drop_infra_ctr = true
	I1205 06:40:03.551340  480112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 06:40:03.551346  480112 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 06:40:03.551353  480112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 06:40:03.551357  480112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 06:40:03.551364  480112 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 06:40:03.551370  480112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 06:40:03.551375  480112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 06:40:03.551380  480112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 06:40:03.551384  480112 command_runner.go:130] > # shared_cpuset = ""
	I1205 06:40:03.551390  480112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 06:40:03.551395  480112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 06:40:03.551398  480112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 06:40:03.551405  480112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 06:40:03.551408  480112 command_runner.go:130] > # pinns_path = ""
	I1205 06:40:03.551414  480112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 06:40:03.551420  480112 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 06:40:03.551424  480112 command_runner.go:130] > # enable_criu_support = true
	I1205 06:40:03.551428  480112 command_runner.go:130] > # Enable/disable the generation of container and
	I1205 06:40:03.551434  480112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 06:40:03.551438  480112 command_runner.go:130] > # enable_pod_events = false
	I1205 06:40:03.551444  480112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 06:40:03.551449  480112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 06:40:03.551453  480112 command_runner.go:130] > # default_runtime = "crun"
	I1205 06:40:03.551458  480112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 06:40:03.551466  480112 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1205 06:40:03.551475  480112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 06:40:03.551480  480112 command_runner.go:130] > # creation as a file is not desired either.
	I1205 06:40:03.551488  480112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 06:40:03.551495  480112 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 06:40:03.551499  480112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 06:40:03.551502  480112 command_runner.go:130] > # ]
	I1205 06:40:03.551511  480112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 06:40:03.551518  480112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 06:40:03.551524  480112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 06:40:03.551528  480112 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 06:40:03.551532  480112 command_runner.go:130] > #
	I1205 06:40:03.551536  480112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 06:40:03.551541  480112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 06:40:03.551544  480112 command_runner.go:130] > # runtime_type = "oci"
	I1205 06:40:03.551549  480112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 06:40:03.551553  480112 command_runner.go:130] > # inherit_default_runtime = false
	I1205 06:40:03.551558  480112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 06:40:03.551562  480112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 06:40:03.551566  480112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 06:40:03.551570  480112 command_runner.go:130] > # monitor_env = []
	I1205 06:40:03.551574  480112 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 06:40:03.551578  480112 command_runner.go:130] > # allowed_annotations = []
	I1205 06:40:03.551583  480112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 06:40:03.551587  480112 command_runner.go:130] > # no_sync_log = false
	I1205 06:40:03.551590  480112 command_runner.go:130] > # default_annotations = {}
	I1205 06:40:03.551594  480112 command_runner.go:130] > # stream_websockets = false
	I1205 06:40:03.551598  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.551631  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.551636  480112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 06:40:03.551643  480112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 06:40:03.551649  480112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 06:40:03.551656  480112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 06:40:03.551659  480112 command_runner.go:130] > #   in $PATH.
	I1205 06:40:03.551665  480112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 06:40:03.551669  480112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 06:40:03.551675  480112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 06:40:03.551678  480112 command_runner.go:130] > #   state.
	I1205 06:40:03.551685  480112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 06:40:03.551690  480112 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 06:40:03.551699  480112 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1205 06:40:03.551706  480112 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1205 06:40:03.551711  480112 command_runner.go:130] > #   the values from the default runtime on load time.
	I1205 06:40:03.551717  480112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 06:40:03.551723  480112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 06:40:03.551730  480112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 06:40:03.551736  480112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 06:40:03.551740  480112 command_runner.go:130] > #   The currently recognized values are:
	I1205 06:40:03.551747  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 06:40:03.551754  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 06:40:03.551761  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 06:40:03.551767  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 06:40:03.551774  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 06:40:03.551781  480112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 06:40:03.551788  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 06:40:03.551794  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 06:40:03.551800  480112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 06:40:03.551807  480112 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1205 06:40:03.551813  480112 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1205 06:40:03.551819  480112 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1205 06:40:03.551828  480112 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1205 06:40:03.551834  480112 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1205 06:40:03.551840  480112 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1205 06:40:03.551848  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1205 06:40:03.551854  480112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 06:40:03.551858  480112 command_runner.go:130] > #   deprecated option "conmon".
	I1205 06:40:03.551865  480112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 06:40:03.551870  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 06:40:03.551877  480112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 06:40:03.551882  480112 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 06:40:03.551888  480112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 06:40:03.551893  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 06:40:03.551900  480112 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1205 06:40:03.551907  480112 command_runner.go:130] > #   conmon-rs by using:
	I1205 06:40:03.551915  480112 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1205 06:40:03.551924  480112 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1205 06:40:03.551931  480112 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1205 06:40:03.551937  480112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 06:40:03.551943  480112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 06:40:03.551950  480112 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1205 06:40:03.551958  480112 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1205 06:40:03.551964  480112 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1205 06:40:03.551971  480112 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1205 06:40:03.551979  480112 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1205 06:40:03.551983  480112 command_runner.go:130] > #   when a machine crash happens.
	I1205 06:40:03.551990  480112 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1205 06:40:03.551997  480112 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1205 06:40:03.552005  480112 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1205 06:40:03.552009  480112 command_runner.go:130] > #   seccomp profile for the runtime.
	I1205 06:40:03.552015  480112 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1205 06:40:03.552022  480112 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1205 06:40:03.552025  480112 command_runner.go:130] > #
	I1205 06:40:03.552029  480112 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 06:40:03.552032  480112 command_runner.go:130] > #
	I1205 06:40:03.552038  480112 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 06:40:03.552044  480112 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 06:40:03.552046  480112 command_runner.go:130] > #
	I1205 06:40:03.552053  480112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 06:40:03.552058  480112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 06:40:03.552061  480112 command_runner.go:130] > #
	I1205 06:40:03.552067  480112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 06:40:03.552070  480112 command_runner.go:130] > # feature.
	I1205 06:40:03.552072  480112 command_runner.go:130] > #
	I1205 06:40:03.552078  480112 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 06:40:03.552085  480112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 06:40:03.552090  480112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 06:40:03.552104  480112 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 06:40:03.552111  480112 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 06:40:03.552114  480112 command_runner.go:130] > #
	I1205 06:40:03.552121  480112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 06:40:03.552127  480112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 06:40:03.552129  480112 command_runner.go:130] > #
	I1205 06:40:03.552135  480112 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 06:40:03.552141  480112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 06:40:03.552144  480112 command_runner.go:130] > #
	I1205 06:40:03.552150  480112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 06:40:03.552156  480112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 06:40:03.552159  480112 command_runner.go:130] > # limitation.
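Putting the notifier prerequisites together: the annotation must be allowed on the runtime handler, and the pod has to opt in with restartPolicy Never. A hedged sketch under those assumptions (file and pod names are hypothetical; depending on how CRI-O merges runtime tables across drop-ins, the full crun table may need to be restated):

	sudo tee /etc/crio/crio.conf.d/30-seccomp-notifier.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: notifier-demo    # hypothetical
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: stop
	spec:
	  restartPolicy: Never   # required, as noted above
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1
	EOF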
	I1205 06:40:03.552163  480112 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1205 06:40:03.552167  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1205 06:40:03.552170  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552174  480112 command_runner.go:130] > runtime_root = "/run/crun"
	I1205 06:40:03.552178  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552182  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552188  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552193  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552197  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552200  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552204  480112 command_runner.go:130] > allowed_annotations = [
	I1205 06:40:03.552208  480112 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1205 06:40:03.552211  480112 command_runner.go:130] > ]
	I1205 06:40:03.552215  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552219  480112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 06:40:03.552223  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1205 06:40:03.552226  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552230  480112 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 06:40:03.552234  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552237  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552241  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552248  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552252  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552256  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552260  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552267  480112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 06:40:03.552272  480112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 06:40:03.552278  480112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 06:40:03.552286  480112 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix, and a set of resources it supports mutating.
	I1205 06:40:03.552300  480112 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1205 06:40:03.552310  480112 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1205 06:40:03.552319  480112 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1205 06:40:03.552324  480112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 06:40:03.552334  480112 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 06:40:03.552342  480112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 06:40:03.552349  480112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 06:40:03.552356  480112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 06:40:03.552359  480112 command_runner.go:130] > # Example:
	I1205 06:40:03.552364  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 06:40:03.552368  480112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 06:40:03.552373  480112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 06:40:03.552382  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 06:40:03.552385  480112 command_runner.go:130] > # cpuset = "0-1"
	I1205 06:40:03.552389  480112 command_runner.go:130] > # cpushares = "5"
	I1205 06:40:03.552392  480112 command_runner.go:130] > # cpuquota = "1000"
	I1205 06:40:03.552396  480112 command_runner.go:130] > # cpuperiod = "100000"
	I1205 06:40:03.552399  480112 command_runner.go:130] > # cpulimit = "35"
	I1205 06:40:03.552402  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.552406  480112 command_runner.go:130] > # The workload name is workload-type.
	I1205 06:40:03.552413  480112 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 06:40:03.552419  480112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 06:40:03.552424  480112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 06:40:03.552432  480112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 06:40:03.552438  480112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 06:40:03.552445  480112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 06:40:03.552452  480112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 06:40:03.552456  480112 command_runner.go:130] > # Default value is set to true
	I1205 06:40:03.552461  480112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 06:40:03.552466  480112 command_runner.go:130] > # disable_hostport_mapping determines whether to disable
	I1205 06:40:03.552471  480112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 06:40:03.552475  480112 command_runner.go:130] > # Default value is set to 'false'
	I1205 06:40:03.552479  480112 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 06:40:03.552484  480112 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1205 06:40:03.552492  480112 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1205 06:40:03.552495  480112 command_runner.go:130] > # timezone = ""
	I1205 06:40:03.552502  480112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 06:40:03.552504  480112 command_runner.go:130] > #
	I1205 06:40:03.552510  480112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 06:40:03.552517  480112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1205 06:40:03.552520  480112 command_runner.go:130] > [crio.image]
	I1205 06:40:03.552526  480112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 06:40:03.552530  480112 command_runner.go:130] > # default_transport = "docker://"
	I1205 06:40:03.552536  480112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 06:40:03.552543  480112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552547  480112 command_runner.go:130] > # global_auth_file = ""
	I1205 06:40:03.552552  480112 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 06:40:03.552557  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552561  480112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.552568  480112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 06:40:03.552574  480112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552581  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552585  480112 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 06:40:03.552591  480112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 06:40:03.552597  480112 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 06:40:03.552603  480112 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 06:40:03.552608  480112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 06:40:03.552612  480112 command_runner.go:130] > # pause_command = "/pause"
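	To spell out the three cases above in config form (a sketch):

	    # pause_command = "/pause"  # commented out: falls back to the default "/pause"
	    pause_command = ""          # explicitly empty: falls back to the pause
	                                # image's own entrypoint and command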
	I1205 06:40:03.552622  480112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 06:40:03.552628  480112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 06:40:03.552641  480112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 06:40:03.552646  480112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 06:40:03.552652  480112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 06:40:03.552658  480112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 06:40:03.552661  480112 command_runner.go:130] > # pinned_images = [
	I1205 06:40:03.552664  480112 command_runner.go:130] > # ]
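	A sketch of the three pattern kinds described above (the image names other than the pause image are hypothetical):

	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	        "registry.k8s.io/kube-*",        # glob: wildcard only at the end
	        "*pause*",                       # keyword: wildcards on both ends
	    ]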
	I1205 06:40:03.552670  480112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 06:40:03.552675  480112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 06:40:03.552681  480112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 06:40:03.552687  480112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 06:40:03.552692  480112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 06:40:03.552697  480112 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1205 06:40:03.552702  480112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 06:40:03.552708  480112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 06:40:03.552716  480112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 06:40:03.552722  480112 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1205 06:40:03.552728  480112 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 06:40:03.552733  480112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
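	As an illustration of that lookup (the namespace name is hypothetical):

	    signature_policy_dir = "/etc/crio/policies"
	    # an image pull in pod namespace "team-a" would consult
	    # /etc/crio/policies/team-a.json, falling back to the
	    # signature_policy configured above if that file is absent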
	I1205 06:40:03.552738  480112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 06:40:03.552746  480112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 06:40:03.552749  480112 command_runner.go:130] > # changing them here.
	I1205 06:40:03.552755  480112 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1205 06:40:03.552758  480112 command_runner.go:130] > # insecure_registries = [
	I1205 06:40:03.552761  480112 command_runner.go:130] > # ]
	I1205 06:40:03.552767  480112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 06:40:03.552772  480112 command_runner.go:130] > # ignore; the last one will ignore volumes entirely.
	I1205 06:40:03.552776  480112 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 06:40:03.552780  480112 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 06:40:03.553031  480112 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 06:40:03.553083  480112 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1205 06:40:03.553106  480112 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1205 06:40:03.553125  480112 command_runner.go:130] > # auto_reload_registries = false
	I1205 06:40:03.553145  480112 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1205 06:40:03.553166  480112 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, which is pull_progress_timeout / 10.
	I1205 06:40:03.553207  480112 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1205 06:40:03.553227  480112 command_runner.go:130] > # pull_progress_timeout = "0s"
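	As a worked example of that interval rule (the value is illustrative):

	    pull_progress_timeout = "5m0s"  # progress is then reported every
	                                    # 5m / 10 = 30s; "0s" disables both
	                                    # the timeout and the progress output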
	I1205 06:40:03.553245  480112 command_runner.go:130] > # The mode of short name resolution.
	I1205 06:40:03.553268  480112 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1205 06:40:03.553288  480112 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1205 06:40:03.553305  480112 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1205 06:40:03.553320  480112 command_runner.go:130] > # short_name_mode = "enforcing"
	I1205 06:40:03.553338  480112 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support mounting OCI artifacts.
	I1205 06:40:03.553365  480112 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1205 06:40:03.553538  480112 command_runner.go:130] > # oci_artifact_mount_support = true
	I1205 06:40:03.553551  480112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 06:40:03.553555  480112 command_runner.go:130] > # CNI plugins.
	I1205 06:40:03.553559  480112 command_runner.go:130] > [crio.network]
	I1205 06:40:03.553564  480112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 06:40:03.553570  480112 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 06:40:03.553574  480112 command_runner.go:130] > # cni_default_network = ""
	I1205 06:40:03.553580  480112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 06:40:03.553587  480112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 06:40:03.553592  480112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 06:40:03.553597  480112 command_runner.go:130] > # plugin_dirs = [
	I1205 06:40:03.553600  480112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 06:40:03.553603  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553607  480112 command_runner.go:130] > # List of included pod metrics.
	I1205 06:40:03.553616  480112 command_runner.go:130] > # included_pod_metrics = [
	I1205 06:40:03.553620  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553625  480112 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 06:40:03.553628  480112 command_runner.go:130] > [crio.metrics]
	I1205 06:40:03.553634  480112 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 06:40:03.553637  480112 command_runner.go:130] > # enable_metrics = false
	I1205 06:40:03.553641  480112 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 06:40:03.553646  480112 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 06:40:03.553655  480112 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 06:40:03.553661  480112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 06:40:03.553670  480112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 06:40:03.553675  480112 command_runner.go:130] > # metrics_collectors = [
	I1205 06:40:03.553679  480112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 06:40:03.553683  480112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 06:40:03.553687  480112 command_runner.go:130] > # 	"containers_oom_total",
	I1205 06:40:03.553691  480112 command_runner.go:130] > # 	"processes_defunct",
	I1205 06:40:03.553695  480112 command_runner.go:130] > # 	"operations_total",
	I1205 06:40:03.553699  480112 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 06:40:03.553703  480112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 06:40:03.553707  480112 command_runner.go:130] > # 	"operations_errors_total",
	I1205 06:40:03.553711  480112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 06:40:03.553715  480112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 06:40:03.553719  480112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 06:40:03.553723  480112 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 06:40:03.553727  480112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 06:40:03.553731  480112 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 06:40:03.553736  480112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 06:40:03.553740  480112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 06:40:03.553744  480112 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1205 06:40:03.553747  480112 command_runner.go:130] > # ]
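	A sketch of a narrowed collector set using the prefix equivalence described above (the particular selection is illustrative):

	    [crio.metrics]
	    enable_metrics = true
	    metrics_collectors = [
	        # "operations_total" selects the same collector as "crio_operations_total"
	        # and "container_runtime_crio_operations_total"
	        "operations_total",
	        "image_pulls_failure_total",
	    ]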
	I1205 06:40:03.553753  480112 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1205 06:40:03.553758  480112 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1205 06:40:03.553763  480112 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 06:40:03.553767  480112 command_runner.go:130] > # metrics_port = 9090
	I1205 06:40:03.553772  480112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 06:40:03.553775  480112 command_runner.go:130] > # metrics_socket = ""
	I1205 06:40:03.553780  480112 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 06:40:03.553786  480112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 06:40:03.553792  480112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 06:40:03.553798  480112 command_runner.go:130] > # certificate on any modification event.
	I1205 06:40:03.553802  480112 command_runner.go:130] > # metrics_cert = ""
	I1205 06:40:03.553807  480112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 06:40:03.553812  480112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 06:40:03.553822  480112 command_runner.go:130] > # metrics_key = ""
	I1205 06:40:03.553828  480112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 06:40:03.553831  480112 command_runner.go:130] > [crio.tracing]
	I1205 06:40:03.553836  480112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 06:40:03.553841  480112 command_runner.go:130] > # enable_tracing = false
	I1205 06:40:03.553846  480112 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 06:40:03.553850  480112 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1205 06:40:03.553857  480112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 06:40:03.553861  480112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
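	Putting the three tracing knobs together (endpoint and sampling values are illustrative, not from this run):

	    [crio.tracing]
	    enable_tracing = true
	    tracing_endpoint = "127.0.0.1:4317"
	    tracing_sampling_rate_per_million = 1000000  # always sample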
	I1205 06:40:03.553865  480112 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 06:40:03.553868  480112 command_runner.go:130] > [crio.nri]
	I1205 06:40:03.553872  480112 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 06:40:03.553876  480112 command_runner.go:130] > # enable_nri = true
	I1205 06:40:03.553880  480112 command_runner.go:130] > # NRI socket to listen on.
	I1205 06:40:03.553884  480112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 06:40:03.553888  480112 command_runner.go:130] > # NRI plugin directory to use.
	I1205 06:40:03.553893  480112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 06:40:03.553898  480112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 06:40:03.553902  480112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 06:40:03.553908  480112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 06:40:03.553979  480112 command_runner.go:130] > # nri_disable_connections = false
	I1205 06:40:03.553985  480112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 06:40:03.553990  480112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 06:40:03.553995  480112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 06:40:03.554000  480112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 06:40:03.554004  480112 command_runner.go:130] > # NRI default validator configuration.
	I1205 06:40:03.554011  480112 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1205 06:40:03.554017  480112 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1205 06:40:03.554021  480112 command_runner.go:130] > # can be restricted/rejected:
	I1205 06:40:03.554025  480112 command_runner.go:130] > # - OCI hook injection
	I1205 06:40:03.554030  480112 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1205 06:40:03.554035  480112 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1205 06:40:03.554039  480112 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1205 06:40:03.554047  480112 command_runner.go:130] > # - adjustment of linux namespaces
	I1205 06:40:03.554054  480112 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1205 06:40:03.554060  480112 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1205 06:40:03.554066  480112 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1205 06:40:03.554070  480112 command_runner.go:130] > #
	I1205 06:40:03.554075  480112 command_runner.go:130] > # [crio.nri.default_validator]
	I1205 06:40:03.554079  480112 command_runner.go:130] > # nri_enable_default_validator = false
	I1205 06:40:03.554084  480112 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1205 06:40:03.554090  480112 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1205 06:40:03.554095  480112 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1205 06:40:03.554101  480112 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1205 06:40:03.554106  480112 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1205 06:40:03.554110  480112 command_runner.go:130] > # nri_validator_required_plugins = [
	I1205 06:40:03.554113  480112 command_runner.go:130] > # ]
	I1205 06:40:03.554118  480112 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
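	A sketch of the default-validator settings above, enabling it, rejecting OCI hook injection, and requiring one plugin (the plugin name is hypothetical):

	    [crio.nri.default_validator]
	    nri_enable_default_validator = true
	    nri_validator_reject_oci_hook_adjustment = true
	    nri_validator_required_plugins = [
	        "10-resource-policy",  # hypothetical plugin name
	    ]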
	I1205 06:40:03.554124  480112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 06:40:03.554127  480112 command_runner.go:130] > [crio.stats]
	I1205 06:40:03.554133  480112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 06:40:03.554138  480112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 06:40:03.554142  480112 command_runner.go:130] > # stats_collection_period = 0
	I1205 06:40:03.554148  480112 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1205 06:40:03.554154  480112 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1205 06:40:03.554158  480112 command_runner.go:130] > # collection_period = 0
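	For example (illustrative values), collecting container stats every 10 seconds while leaving sandbox metrics on-demand:

	    [crio.stats]
	    stats_collection_period = 10
	    collection_period = 0  # 0 = on-demand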
	I1205 06:40:03.556162  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527241832Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1205 06:40:03.556207  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527278608Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1205 06:40:03.556230  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527308122Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1205 06:40:03.556255  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.52733264Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1205 06:40:03.556280  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527409367Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.556295  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527814951Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1205 06:40:03.556306  480112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 06:40:03.556383  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:40:03.556397  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:40:03.556420  480112 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:40:03.556447  480112 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:40:03.556582  480112 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:40:03.556659  480112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:40:03.563611  480112 command_runner.go:130] > kubeadm
	I1205 06:40:03.563630  480112 command_runner.go:130] > kubectl
	I1205 06:40:03.563636  480112 command_runner.go:130] > kubelet
	I1205 06:40:03.564590  480112 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:40:03.564681  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:40:03.572146  480112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:40:03.584914  480112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:40:03.598402  480112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 06:40:03.610806  480112 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:40:03.614247  480112 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:40:03.614336  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.749526  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:04.526831  480112 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:40:04.526920  480112 certs.go:195] generating shared ca certs ...
	I1205 06:40:04.526970  480112 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:04.527146  480112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:40:04.527262  480112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:40:04.527298  480112 certs.go:257] generating profile certs ...
	I1205 06:40:04.527454  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:40:04.527572  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:40:04.527654  480112 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:40:04.527683  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:40:04.527717  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:40:04.527750  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:40:04.527779  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:40:04.527812  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:40:04.527845  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:40:04.527901  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:40:04.527942  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:40:04.528018  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:40:04.528084  480112 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:40:04.528110  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:40:04.528175  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:40:04.528223  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:40:04.528266  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:40:04.528351  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:04.528416  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.528448  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem -> /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.528484  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.529122  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:40:04.549434  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:40:04.568942  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:40:04.588032  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:40:04.616779  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:40:04.636137  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:40:04.655504  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:40:04.673755  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:40:04.692822  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:40:04.711199  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:40:04.730794  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:40:04.748559  480112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:40:04.762229  480112 ssh_runner.go:195] Run: openssl version
	I1205 06:40:04.768327  480112 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:40:04.768697  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.776287  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:40:04.784133  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788189  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788221  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788277  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.829541  480112 command_runner.go:130] > b5213941
	I1205 06:40:04.829985  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:40:04.837884  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.845797  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:40:04.853974  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.857841  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858230  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858295  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.900152  480112 command_runner.go:130] > 51391683
	I1205 06:40:04.900696  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:40:04.908660  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.916381  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:40:04.924345  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928449  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928489  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928538  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.969475  480112 command_runner.go:130] > 3ec20f2e
	I1205 06:40:04.969979  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:40:04.977627  480112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981676  480112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981703  480112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:40:04.981710  480112 command_runner.go:130] > Device: 259,1	Inode: 1046940     Links: 1
	I1205 06:40:04.981717  480112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:04.981724  480112 command_runner.go:130] > Access: 2025-12-05 06:35:56.052204819 +0000
	I1205 06:40:04.981729  480112 command_runner.go:130] > Modify: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981735  480112 command_runner.go:130] > Change: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981741  480112 command_runner.go:130] >  Birth: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981812  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:40:05.025511  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.026281  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:40:05.067472  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.067923  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:40:05.109199  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.110439  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:40:05.151291  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.151789  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:40:05.192630  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.193112  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:40:05.234917  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.235493  480112 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:40:05.235576  480112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:40:05.235658  480112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:40:05.274773  480112 cri.go:89] found id: ""
	I1205 06:40:05.274854  480112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:40:05.284543  480112 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:40:05.284569  480112 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:40:05.284576  480112 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:40:05.284587  480112 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:40:05.284593  480112 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:40:05.284641  480112 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:40:05.293745  480112 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:40:05.294169  480112 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.294277  480112 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-441321/kubeconfig needs updating (will repair): [kubeconfig missing "functional-787602" cluster setting kubeconfig missing "functional-787602" context setting]
	I1205 06:40:05.294658  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.295082  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.295239  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.295723  480112 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:40:05.295760  480112 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:40:05.295766  480112 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:40:05.295771  480112 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:40:05.295779  480112 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:40:05.296148  480112 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:40:05.296228  480112 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:40:05.305058  480112 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:40:05.305103  480112 kubeadm.go:602] duration metric: took 20.504477ms to restartPrimaryControlPlane
	I1205 06:40:05.305113  480112 kubeadm.go:403] duration metric: took 69.632192ms to StartCluster
	I1205 06:40:05.305127  480112 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305185  480112 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.305773  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305969  480112 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:40:05.306285  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:05.306340  480112 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:40:05.306433  480112 addons.go:70] Setting storage-provisioner=true in profile "functional-787602"
	I1205 06:40:05.306448  480112 addons.go:239] Setting addon storage-provisioner=true in "functional-787602"
	I1205 06:40:05.306452  480112 addons.go:70] Setting default-storageclass=true in profile "functional-787602"
	I1205 06:40:05.306473  480112 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-787602"
	I1205 06:40:05.306480  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.306771  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.306997  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.310651  480112 out.go:179] * Verifying Kubernetes components...
	I1205 06:40:05.313979  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:05.339795  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.340007  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.340282  480112 addons.go:239] Setting addon default-storageclass=true in "functional-787602"
	I1205 06:40:05.340312  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.340728  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.361959  480112 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:40:05.364893  480112 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.364921  480112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:40:05.364987  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.384451  480112 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:05.384479  480112 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:40:05.384563  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.411372  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.432092  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.510112  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:05.550609  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.557147  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.275527  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275618  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275677  480112 retry.go:31] will retry after 247.926554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
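The `ssh_runner`/`command_runner` entries above record minikube shelling out to the bundled kubectl with an explicit KUBECONFIG; the apply fails because kubectl cannot download the OpenAPI schema for validation while the apiserver is down. A minimal local sketch of the same invocation pattern (running kubectl directly rather than over SSH is an assumption for illustration; the manifest and kubeconfig paths are copied from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command: kubectl apply -f <manifest> with an
	// explicit KUBECONFIG in the environment.
	cmd := exec.Command("kubectl", "apply", "-f",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down this fails exactly as in the log:
		// "dial tcp [::1]:8441: connect: connection refused".
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("applied:\n%s", out)
}
```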
	I1205 06:40:06.275753  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275786  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275814  480112 retry.go:31] will retry after 139.276641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275869  480112 node_ready.go:35] waiting up to 6m0s for node "functional-787602" to be "Ready" ...
	I1205 06:40:06.275986  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.276069  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.276382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
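The `round_trippers` Request/Response pairs are `node_ready.go` polling GET /api/v1/nodes/functional-787602 roughly every 500ms; the empty status and zero-millisecond responses mean the TCP connection itself failed before any HTTP exchange. A sketch of equivalent polling with client-go rather than minikube's internal round-tripper (the kubeconfig path and 6-minute deadline are taken from the log; the rest is a hypothetical stand-in):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"functional-787602", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} else {
			// With the apiserver down, every poll errors out as in the log.
			fmt.Println("poll failed, will retry:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}
```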
	I1205 06:40:06.415646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.474935  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.474981  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.475001  480112 retry.go:31] will retry after 366.421161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.524197  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.584795  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.584843  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.584873  480112 retry.go:31] will retry after 312.76439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
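Each `retry.go:31` line above schedules another attempt after a randomized delay: a few hundred milliseconds at first, growing to multi-second waits later in the log (up to 15.7s). A minimal sketch of that retry-until-deadline pattern, assuming a simple doubling backoff with jitter; `retryWithBackoff` is a hypothetical helper and the exact policy in minikube's retry package may differ:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn, and on failure sleeps a jittered, growing
// delay and retries until the deadline passes.
func retryWithBackoff(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", deadline, err)
		}
		// Jitter the delay so concurrent appliers don't retry in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // grow toward the multi-second waits seen later in the log
	}
}

func main() {
	_ = retryWithBackoff(2*time.Second, func() error {
		return errors.New("connect: connection refused")
	})
}
```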
	I1205 06:40:06.776120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.776655  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.841962  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.898526  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.904086  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.904127  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.904149  480112 retry.go:31] will retry after 740.273906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.959857  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.963461  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.963497  480112 retry.go:31] will retry after 759.965783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.276975  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.277072  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.277469  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.645230  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:07.705790  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.705833  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.705854  480112 retry.go:31] will retry after 642.466008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.724048  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:07.776045  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.776157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.776481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.791584  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.795338  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.795382  480112 retry.go:31] will retry after 614.279076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:08.276605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
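This warning shows the failure from the readiness-poll side: every GET to https://192.168.49.2:8441 is refused at the TCP layer, meaning nothing is listening on the apiserver port yet, rather than a TLS or auth problem. A quick way to confirm that symptom from Go, independent of any Kubernetes client (the address is taken from the log; the probe itself is an illustrative diagnostic, not part of the test):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" from DialTimeout means no listener on the port.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```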
	I1205 06:40:08.348828  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:08.405271  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.408500  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.408576  480112 retry.go:31] will retry after 1.343995427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.410740  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:08.473489  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.473541  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.473564  480112 retry.go:31] will retry after 1.078913702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.777094  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.777222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.777651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.276356  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.276453  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.276780  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.553646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:09.614016  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.614089  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.614116  480112 retry.go:31] will retry after 2.379780781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.753405  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:09.777031  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.777132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.777482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.813171  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.813239  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.813272  480112 retry.go:31] will retry after 1.978465808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:10.276816  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.277257  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:10.277348  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:10.776020  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.776102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.776363  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.276081  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.276155  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.791876  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:11.850961  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:11.851011  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.851047  480112 retry.go:31] will retry after 1.715194365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.994161  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:12.058032  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:12.058079  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.058098  480112 retry.go:31] will retry after 2.989540966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.276377  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.276451  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.276701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:12.776111  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:12.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:13.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.276532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:13.567026  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:13.620219  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:13.623514  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.623554  480112 retry.go:31] will retry after 5.458226005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.776806  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.776876  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.777207  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.277043  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.277126  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.277411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:14.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:15.048089  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:15.111053  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:15.111091  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.111112  480112 retry.go:31] will retry after 5.631155228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.276375  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.276443  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:15.776648  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.777039  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.276857  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.276930  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.776968  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.777300  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:16.777347  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:17.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.276495  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:17.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.276064  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.276137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.276439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:19.082075  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:19.143244  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:19.143293  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.143314  480112 retry.go:31] will retry after 4.646546475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.276638  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.276712  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.277087  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:19.277141  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:19.776926  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.777007  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.777341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.276113  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.743196  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:20.776726  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.776805  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.777070  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.801108  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:20.801144  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:20.801162  480112 retry.go:31] will retry after 9.136671028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:21.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.276973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.277268  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:21.277311  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:21.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.276249  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.776313  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.776619  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.276136  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.776172  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.776265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:23.776664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:23.790980  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:23.852305  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:23.852351  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:23.852373  480112 retry.go:31] will retry after 4.852638111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:24.276878  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.276951  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.277225  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:24.776145  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.276631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.776628  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.776885  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:25.776924  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:26.276685  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.276766  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.277101  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:26.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.777045  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.777350  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.277008  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.277082  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.277349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.776144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:28.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.276512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:28.276571  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:28.705256  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:28.766465  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:28.766519  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.766541  480112 retry.go:31] will retry after 15.718503653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.777014  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.276890  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.276967  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.277333  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.776501  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.776578  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.776920  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.938493  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:30.002212  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:30.002257  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.002277  480112 retry.go:31] will retry after 5.082732051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
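Note that the two failure modes in this stretch are the same TCP refusal seen from two places: the readiness poll targets 192.168.49.2:8441 from the host, while kubectl's client-side validation tries to fetch /openapi/v2 from localhost:8441 inside the node, and both are refused. That points at the apiserver process itself being down rather than a broken network path. A quick Go probe to confirm the distinction (a sketch; the addresses are the ones in the log, and the localhost check is only meaningful when run on the node itself):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    // probe reports whether anything is accepting TCP connections at addr.
    func probe(addr string) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Printf("%s: open (apiserver listening)\n", addr)
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Printf("%s: connection refused (nothing listening)\n", addr)
        default:
            fmt.Printf("%s: %v (possible network/firewall problem)\n", addr, err)
        }
    }

    func main() {
        probe("192.168.49.2:8441") // address used by the readiness poll
        probe("localhost:8441")    // address used by kubectl's openapi fetch
    }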
	I1205 06:40:30.276542  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.276613  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.276880  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:30.276935  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:30.776666  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.776745  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.777100  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.276768  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.276846  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.277194  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.776934  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.777009  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.777271  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.276395  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.276491  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.276813  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.776164  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.776245  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:32.776649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.276140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:33.776151  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.276378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.776431  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.776497  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.776750  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:34.776788  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:35.085301  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:35.148531  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:35.152882  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.152918  480112 retry.go:31] will retry after 11.086200752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.276603  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:35.777106  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.777182  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.777443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:36.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:40:36.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:36.276482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:36.776167  480112 type.go:168] "Request Body" body=""
	I1205 06:40:36.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:36.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:37.276190  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.276271  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.276583  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:37.276633  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:37.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.776452  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:38.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:40:38.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:38.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:38.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:40:38.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:38.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:39.276021  480112 type.go:168] "Request Body" body=""
	I1205 06:40:39.276100  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:39.276361  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:39.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:40:39.776193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:39.776520  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:39.776575  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:40.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:40.776012  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.776078  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:41.276108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:41.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:41.276540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:41.776119  480112 type.go:168] "Request Body" body=""
	I1205 06:40:41.776197  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:41.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:42.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.276583  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:42.276631  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:42.776277  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.776365  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.776691  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:43.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:43.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:43.276573  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:43.776091  480112 type.go:168] "Request Body" body=""
	I1205 06:40:43.776169  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:43.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:44.276128  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.276566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:44.485984  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:44.554072  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:44.557893  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.557927  480112 retry.go:31] will retry after 22.628614414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.776735  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:44.776781  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:45.276131  480112 type.go:168] "Request Body" body=""
	I1205 06:40:45.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:45.276570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:45.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:40:45.776253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:45.776599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.239320  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:46.276723  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.277080  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.296820  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:46.296888  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.296909  480112 retry.go:31] will retry after 16.475007469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.776108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.776261  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:47.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:40:47.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:47.276550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:47.276621  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:47.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:40:47.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:47.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:48.276087  480112 type.go:168] "Request Body" body=""
	I1205 06:40:48.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:48.276413  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:48.776153  480112 type.go:168] "Request Body" body=""
	I1205 06:40:48.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:48.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:49.276252  480112 type.go:168] "Request Body" body=""
	I1205 06:40:49.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:49.276616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:49.276666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:49.776457  480112 type.go:168] "Request Body" body=""
	I1205 06:40:49.776539  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:49.776814  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:50.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:40:50.276228  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:50.276579  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:50.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:40:50.776350  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:50.776648  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:51.276069  480112 type.go:168] "Request Body" body=""
	I1205 06:40:51.276140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:51.276477  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:40:51.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:51.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:51.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:52.276278  480112 type.go:168] "Request Body" body=""
	I1205 06:40:52.276356  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:52.276689  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:52.776334  480112 type.go:168] "Request Body" body=""
	I1205 06:40:52.776409  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:52.776733  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:53.276108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:53.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:53.276508  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:53.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:40:53.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:53.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:54.276120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:54.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:54.276515  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:54.276568  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:54.776448  480112 type.go:168] "Request Body" body=""
	I1205 06:40:54.776530  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:54.776853  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:55.276690  480112 type.go:168] "Request Body" body=""
	I1205 06:40:55.276781  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:55.277125  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:55.777039  480112 type.go:168] "Request Body" body=""
	I1205 06:40:55.777119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:55.777385  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:56.276092  480112 type.go:168] "Request Body" body=""
	I1205 06:40:56.276176  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:56.276480  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:56.776105  480112 type.go:168] "Request Body" body=""
	I1205 06:40:56.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:56.776525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:56.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:57.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:40:57.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:57.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:57.776222  480112 type.go:168] "Request Body" body=""
	I1205 06:40:57.776307  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:57.776615  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:58.276119  480112 type.go:168] "Request Body" body=""
	I1205 06:40:58.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:58.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:58.776243  480112 type.go:168] "Request Body" body=""
	I1205 06:40:58.776317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:58.776568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:58.776608  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:59.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:40:59.276178  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:59.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:59.776106  480112 type.go:168] "Request Body" body=""
	I1205 06:40:59.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:59.776468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:00.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:41:00.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:00.276551  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:00.776216  480112 type.go:168] "Request Body" body=""
	I1205 06:41:00.776291  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:00.776616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:00.776689  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:01.276381  480112 type.go:168] "Request Body" body=""
	I1205 06:41:01.276456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:01.276781  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:01.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:41:01.776172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:01.776479  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:02.276128  480112 type.go:168] "Request Body" body=""
	I1205 06:41:02.276202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:02.276529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:02.772181  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:02.776748  480112 type.go:168] "Request Body" body=""
	I1205 06:41:02.776818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:02.777092  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:02.777132  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:02.828748  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:02.831873  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:02.831907  480112 retry.go:31] will retry after 23.767145255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:03.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:41:03.276184  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:03.276443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:03.776136  480112 type.go:168] "Request Body" body=""
	I1205 06:41:03.776260  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:03.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:04.276224  480112 type.go:168] "Request Body" body=""
	I1205 06:41:04.276300  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:04.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:04.776644  480112 type.go:168] "Request Body" body=""
	I1205 06:41:04.776715  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:04.777004  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:05.276846  480112 type.go:168] "Request Body" body=""
	I1205 06:41:05.276924  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:05.277214  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:05.277261  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:05.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:41:05.776214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:05.776532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:06.276212  480112 type.go:168] "Request Body" body=""
	I1205 06:41:06.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:06.276590  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:06.776196  480112 type.go:168] "Request Body" body=""
	I1205 06:41:06.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:06.776601  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:07.187370  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:07.246801  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:07.246844  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:07.246863  480112 retry.go:31] will retry after 35.018877023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
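By this point the storageclass retries have waited 15.72 s, 22.63 s, and 35.02 s, and the storage-provisioner retries 5.08 s, 11.09 s, 16.48 s, and 23.77 s: a roughly geometric progression with jitter, which is why the durations are such odd fractions. k8s.io/apimachinery's wait.Backoff produces this shape; a sketch with parameters picked only to illustrate, not taken from minikube's source:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        b := wait.Backoff{
            Duration: 5 * time.Second, // first wait
            Factor:   1.5,             // multiplicative growth between steps
            Jitter:   0.5,             // up to +50% randomness per step
            Steps:    4,               // how many Step() calls may advance
        }
        for i := 0; i < 4; i++ {
            fmt.Println(b.Step()) // e.g. ~5s, ~9s, ~14s, ~21s; varies run to run
        }
    }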
	I1205 06:41:07.277002  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.277431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:07.277488  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:07.777040  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.777122  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.777377  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:08.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:41:08.276233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:08.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:08.776269  480112 type.go:168] "Request Body" body=""
	I1205 06:41:08.776342  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:08.776663  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:09.276083  480112 type.go:168] "Request Body" body=""
	I1205 06:41:09.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:09.276449  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:09.776169  480112 type.go:168] "Request Body" body=""
	I1205 06:41:09.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:09.776565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:09.776619  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:10.276305  480112 type.go:168] "Request Body" body=""
	I1205 06:41:10.276400  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:10.276764  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:10.776148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:10.776250  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:10.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:12.276527  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET of /api/v1/nodes/functional-787602 was retried every ~500ms through 06:41:26.276, every response empty (status="" milliseconds=0); the node_ready.go:55 "will retry" warning recurred at 06:41:14, 06:41:16, 06:41:18, 06:41:20, 06:41:23 and 06:41:25 ...]
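The polls above are minikube's node-readiness wait: node_ready.go repeatedly GETs the node object and checks its "Ready" condition, sleeping roughly 500ms between attempts. A minimal shell equivalent, assuming kubectl and the kubeconfig path shown elsewhere in this log (a sketch only, not minikube's actual implementation):

	until kubectl --kubeconfig /var/lib/minikube/kubeconfig get node functional-787602 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null | grep -q True; do
	  sleep 0.5  # matches the ~500ms retry interval visible in the timestamps above
	done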
	I1205 06:41:26.599995  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:26.657664  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660860  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660976  480112 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
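kubectl's own error text points at the failure mode: client-side validation needs the apiserver's /openapi/v2 endpoint, which is refusing connections, so the command fails before the apply is even attempted. The flag kubectl suggests skips that validation step (a sketch of the suggested workaround; the apply itself would still fail while the apiserver is down, which is why minikube instead retries the full command):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml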
	I1205 06:41:26.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:26.776182  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:26.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:27.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET/empty-response poll repeated every ~500ms through 06:41:41.776, with the node_ready.go:55 "will retry" warning recurring at 06:41:29, 06:41:31, 06:41:34, 06:41:36, 06:41:38 and 06:41:40 ...]
	I1205 06:41:42.266330  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:42.277622  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.277694  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.277960  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.360709  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361696  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361795  480112 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:41:42.365007  480112 out.go:179] * Enabled addons: 
	I1205 06:41:42.368666  480112 addons.go:530] duration metric: took 1m37.062317768s for enable addons: enabled=[]
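The summary above confirms that no addon callback succeeded (enabled=[]) after 1m37s of retries. One way to inspect addon state for this profile afterwards, assuming the minikube CLI is on PATH (illustrative, not part of this test's transcript):

	minikube addons list -p functional-787602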
	I1205 06:41:42.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:43.276733  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET/empty-response poll repeated every ~500ms through 06:42:08.776 (one attempt at 06:42:04.782 took 5ms, still with no status); the node_ready.go:55 "will retry" warning recurred at 06:41:45, 06:41:47, 06:41:50, 06:41:52, 06:41:54, 06:41:57, 06:41:59, 06:42:01, 06:42:03, 06:42:05 and 06:42:08 ...]
	I1205 06:42:09.276079  480112 type.go:168] "Request Body" body=""
	I1205 06:42:09.276153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:09.276425  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:09.776266  480112 type.go:168] "Request Body" body=""
	I1205 06:42:09.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:09.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:10.276403  480112 type.go:168] "Request Body" body=""
	I1205 06:42:10.276479  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:10.276767  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:10.276814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:10.776451  480112 type.go:168] "Request Body" body=""
	I1205 06:42:10.776520  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:10.776795  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:11.276636  480112 type.go:168] "Request Body" body=""
	I1205 06:42:11.276714  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:11.277054  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:11.776915  480112 type.go:168] "Request Body" body=""
	I1205 06:42:11.776994  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:11.777329  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:12.276040  480112 type.go:168] "Request Body" body=""
	I1205 06:42:12.276119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:12.276407  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:12.776109  480112 type.go:168] "Request Body" body=""
	I1205 06:42:12.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:12.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:12.776597  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:13.277062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:13.277174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:13.277498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:13.776187  480112 type.go:168] "Request Body" body=""
	I1205 06:42:13.776262  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:13.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:14.276253  480112 type.go:168] "Request Body" body=""
	I1205 06:42:14.276331  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:14.276688  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:14.776496  480112 type.go:168] "Request Body" body=""
	I1205 06:42:14.776570  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:14.776890  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:14.776948  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:15.276665  480112 type.go:168] "Request Body" body=""
	I1205 06:42:15.276733  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:15.277042  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:15.776898  480112 type.go:168] "Request Body" body=""
	I1205 06:42:15.776973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:15.777305  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:16.276026  480112 type.go:168] "Request Body" body=""
	I1205 06:42:16.276107  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:16.276436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:16.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:42:16.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:16.776492  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:17.276116  480112 type.go:168] "Request Body" body=""
	I1205 06:42:17.276185  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:17.276519  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:17.276576  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:17.776272  480112 type.go:168] "Request Body" body=""
	I1205 06:42:17.776357  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:17.776675  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:18.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:18.276109  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:18.276436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:18.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:42:18.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:18.776525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:19.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:42:19.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:19.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:19.276619  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:19.776407  480112 type.go:168] "Request Body" body=""
	I1205 06:42:19.776483  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:19.776740  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:20.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:42:20.276234  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:20.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:20.776163  480112 type.go:168] "Request Body" body=""
	I1205 06:42:20.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:20.776556  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:21.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:42:21.276156  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:21.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:21.776197  480112 type.go:168] "Request Body" body=""
	I1205 06:42:21.776270  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:21.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:21.776630  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:22.276147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:22.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:22.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:22.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:42:22.776267  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:22.776568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:23.276270  480112 type.go:168] "Request Body" body=""
	I1205 06:42:23.276346  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:23.276675  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:23.776093  480112 type.go:168] "Request Body" body=""
	I1205 06:42:23.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:23.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:24.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:42:24.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:24.276435  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:24.276482  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:24.776113  480112 type.go:168] "Request Body" body=""
	I1205 06:42:24.776187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:24.776508  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:25.276141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:25.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:25.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:25.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:25.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:25.776592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:26.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:26.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:26.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:26.276597  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:26.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:42:26.776223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:26.776559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:27.276255  480112 type.go:168] "Request Body" body=""
	I1205 06:42:27.276327  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:27.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:27.776272  480112 type.go:168] "Request Body" body=""
	I1205 06:42:27.776352  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:27.776694  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:28.276141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:28.276215  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:28.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:28.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:28.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:28.776441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:28.776496  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:29.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:29.276214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:29.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:29.776193  480112 type.go:168] "Request Body" body=""
	I1205 06:42:29.776294  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:29.776633  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.276354  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.276958  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.776754  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.776886  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.777216  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:30.777271  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:31.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.276997  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.277353  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:31.776905  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.776973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.777239  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.276031  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.276129  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.776566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:33.276005  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.276073  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.276326  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:33.276364  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:33.776056  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.776130  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.776489  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.276252  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.276601  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.776105  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.776170  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.776439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:35.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.276502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:35.276548  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:35.776412  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.776485  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.776805  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.276468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:37.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:37.276578  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:37.776073  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.776140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.776354  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.276191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.276447  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.776154  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.776231  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.776555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:39.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:40.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.776168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.776483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.276160  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.776311  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.776412  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:41.776800  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:42.276460  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.276533  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.276835  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:42.776147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.276274  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.776371  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:44.276399  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.276475  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.276774  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:44.276818  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:44.776823  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.776896  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.777260  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.277015  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.277165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.277467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.276290  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.276372  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.276755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.776351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.776696  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:46.776865  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:47.276162  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.276562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:47.776400  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.776503  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.777026  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.276644  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.276723  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.276978  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.776820  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.776899  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.777234  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:48.777287  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:49.277045  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.277135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.277475  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:49.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.776484  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.276116  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.276446  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.776127  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.776478  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:51.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:51.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.776200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.276504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.776090  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.776470  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.276544  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:53.776655  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:54.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:54.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.776393  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.776463  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.776718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:55.776760  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:56.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:56.776279  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.776355  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.276419  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.776121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:58.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:58.276716  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:58.776027  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.776099  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.776349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.276501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.776817  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.776902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.777233  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:00.277352  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.277456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.277768  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:00.277814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:00.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.776654  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.776901  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.776971  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.777244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.277063  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.277162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.277496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:02.776546  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:03.276099  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.276179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:03.776134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.276221  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.276644  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.776637  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.776900  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:04.776951  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:05.276709  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:05.776977  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.777064  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.777431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.276481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.776100  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.776494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:07.276116  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:07.276607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:07.776246  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.776316  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.776269  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.276194  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.276266  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.276528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.776304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.776699  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:09.776757  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:10.276472  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.276560  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.276905  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:10.776647  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.776717  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.776986  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.276873  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.277209  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.777023  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.777098  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.777457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:11.777510  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:12.276089  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.276172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:12.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.276276  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.276351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.276678  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.776139  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:14.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.276507  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:14.276551  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:14.776223  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.776298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.776621  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.276090  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.276486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:16.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.276607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:16.276663  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:16.776324  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.776396  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.776758  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.276146  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.776230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.776575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.276431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:18.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:19.276304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.276385  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.276748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:19.776687  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.776760  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.777008  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:20.276846  480112 type.go:168] "Request Body" body=""
	I1205 06:43:20.276923  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:20.277244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:20.777028  480112 type.go:168] "Request Body" body=""
	I1205 06:43:20.777103  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:20.777448  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:20.777499  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:21.276166  480112 type.go:168] "Request Body" body=""
	I1205 06:43:21.276240  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:21.276519  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:21.776177  480112 type.go:168] "Request Body" body=""
	I1205 06:43:21.776259  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:21.776596  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:22.276311  480112 type.go:168] "Request Body" body=""
	I1205 06:43:22.276394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:22.276742  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:22.776321  480112 type.go:168] "Request Body" body=""
	I1205 06:43:22.776394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:22.776716  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:23.276454  480112 type.go:168] "Request Body" body=""
	I1205 06:43:23.276541  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:23.276962  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:23.277021  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:23.776836  480112 type.go:168] "Request Body" body=""
	I1205 06:43:23.776923  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:23.777277  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:24.277022  480112 type.go:168] "Request Body" body=""
	I1205 06:43:24.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:24.277402  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:24.776239  480112 type.go:168] "Request Body" body=""
	I1205 06:43:24.776322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:24.776645  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:25.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:43:25.276424  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:25.276715  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:25.776566  480112 type.go:168] "Request Body" body=""
	I1205 06:43:25.776639  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:25.776913  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:25.776962  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:26.276795  480112 type.go:168] "Request Body" body=""
	I1205 06:43:26.276868  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:26.277314  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:26.776043  480112 type.go:168] "Request Body" body=""
	I1205 06:43:26.776120  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:26.776468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:27.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:27.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:27.276458  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:27.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:43:27.776174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:27.776490  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:28.276190  480112 type.go:168] "Request Body" body=""
	I1205 06:43:28.276267  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:28.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:28.276648  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:28.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:43:28.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:28.776457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:29.276201  480112 type.go:168] "Request Body" body=""
	I1205 06:43:29.276276  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:29.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:29.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:29.776229  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:29.776584  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:30.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:43:30.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:30.276477  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:30.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:43:30.776218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:30.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:30.776585  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:31.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:43:31.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:31.276684  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:31.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:43:31.776149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:31.776434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:32.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:43:32.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:32.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:32.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:43:32.776375  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:32.776708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:32.776765  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:43:33.276143  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:33.276404  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:33.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:43:33.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:33.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:34.276299  480112 type.go:168] "Request Body" body=""
	I1205 06:43:34.276386  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:34.276745  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:34.776318  480112 type.go:168] "Request Body" body=""
	I1205 06:43:34.776389  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:34.776645  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:35.276158  480112 type.go:168] "Request Body" body=""
	I1205 06:43:35.276233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:35.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:35.276620  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:35.776302  480112 type.go:168] "Request Body" body=""
	I1205 06:43:35.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:35.776730  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:36.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:43:36.276228  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:36.276513  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:36.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:43:36.776244  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:36.776582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:37.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:43:37.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:37.276568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:37.776217  480112 type.go:168] "Request Body" body=""
	I1205 06:43:37.776283  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:37.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:37.776588  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:38.276170  480112 type.go:168] "Request Body" body=""
	I1205 06:43:38.276253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:38.276591  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:38.776284  480112 type.go:168] "Request Body" body=""
	I1205 06:43:38.776366  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:38.776702  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:39.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:39.276158  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:39.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:39.776295  480112 type.go:168] "Request Body" body=""
	I1205 06:43:39.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:39.776693  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:39.776750  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:40.276217  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:40.276537  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:40.776079  480112 type.go:168] "Request Body" body=""
	I1205 06:43:40.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:40.776460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:41.276147  480112 type.go:168] "Request Body" body=""
	I1205 06:43:41.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:41.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:41.776268  480112 type.go:168] "Request Body" body=""
	I1205 06:43:41.776350  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:41.776641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:42.276112  480112 type.go:168] "Request Body" body=""
	I1205 06:43:42.276194  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:42.276467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:42.276522  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:42.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:43:42.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:42.776576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:43.276319  480112 type.go:168] "Request Body" body=""
	I1205 06:43:43.276422  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:43.276770  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:43.776459  480112 type.go:168] "Request Body" body=""
	I1205 06:43:43.776529  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:43.776862  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:44.276624  480112 type.go:168] "Request Body" body=""
	I1205 06:43:44.276703  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:44.277019  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:44.277073  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:44.776885  480112 type.go:168] "Request Body" body=""
	I1205 06:43:44.776964  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:44.777314  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:45.276046  480112 type.go:168] "Request Body" body=""
	I1205 06:43:45.276131  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:45.276394  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:45.776391  480112 type.go:168] "Request Body" body=""
	I1205 06:43:45.776465  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:45.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:46.276420  480112 type.go:168] "Request Body" body=""
	I1205 06:43:46.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:46.276883  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:46.776662  480112 type.go:168] "Request Body" body=""
	I1205 06:43:46.776730  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:46.776998  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:46.777043  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:47.276766  480112 type.go:168] "Request Body" body=""
	I1205 06:43:47.276837  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:47.277173  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:47.776961  480112 type.go:168] "Request Body" body=""
	I1205 06:43:47.777038  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:47.777378  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:48.277033  480112 type.go:168] "Request Body" body=""
	I1205 06:43:48.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:48.277382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:48.776065  480112 type.go:168] "Request Body" body=""
	I1205 06:43:48.776137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:48.776471  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:49.276093  480112 type.go:168] "Request Body" body=""
	I1205 06:43:49.276177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:49.276505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:49.276562  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:49.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:43:49.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:49.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:50.276235  480112 type.go:168] "Request Body" body=""
	I1205 06:43:50.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:50.276637  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:50.776106  480112 type.go:168] "Request Body" body=""
	I1205 06:43:50.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:50.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:51.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:43:51.276152  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:51.276423  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:51.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:51.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:51.776605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:52.276271  480112 type.go:168] "Request Body" body=""
	I1205 06:43:52.276356  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:52.276672  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:52.776325  480112 type.go:168] "Request Body" body=""
	I1205 06:43:52.776416  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:52.776729  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:53.276142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:53.276218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:53.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:53.776158  480112 type.go:168] "Request Body" body=""
	I1205 06:43:53.776239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:53.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:54.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:43:54.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:54.276616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:54.276664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-787602 cycle repeats every ~500 ms from 06:43:54.776 through 06:44:54.777, each response empty (status="" headers="" milliseconds=0), with the node_ready.go:55 "connection refused" warning recurring at roughly 2 s intervals, last at 06:44:54.777107 ...]
	I1205 06:44:55.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:44:55.276974  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:55.277307  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:55.776239  480112 type.go:168] "Request Body" body=""
	I1205 06:44:55.776324  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:55.776673  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:56.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:44:56.276245  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:56.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:56.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:44:56.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:56.776700  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:57.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:44:57.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:57.276411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:57.276450  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:57.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:44:57.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:57.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:58.276231  480112 type.go:168] "Request Body" body=""
	I1205 06:44:58.276307  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:58.276629  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:58.776188  480112 type.go:168] "Request Body" body=""
	I1205 06:44:58.776260  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:58.776520  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:59.276166  480112 type.go:168] "Request Body" body=""
	I1205 06:44:59.276248  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:59.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:59.276596  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:59.776249  480112 type.go:168] "Request Body" body=""
	I1205 06:44:59.776324  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:59.776665  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:00.276382  480112 type.go:168] "Request Body" body=""
	I1205 06:45:00.276469  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:00.276785  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:00.776579  480112 type.go:168] "Request Body" body=""
	I1205 06:45:00.776666  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:00.777193  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:01.276126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:01.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:01.276481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:01.776101  480112 type.go:168] "Request Body" body=""
	I1205 06:45:01.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:01.776510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:01.776573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:02.276144  480112 type.go:168] "Request Body" body=""
	I1205 06:45:02.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:02.276570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:02.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:45:02.776222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:02.776588  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:03.276207  480112 type.go:168] "Request Body" body=""
	I1205 06:45:03.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:03.276642  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:03.776382  480112 type.go:168] "Request Body" body=""
	I1205 06:45:03.776473  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:03.776816  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:03.776873  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:04.276616  480112 type.go:168] "Request Body" body=""
	I1205 06:45:04.276687  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:04.276947  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:04.776885  480112 type.go:168] "Request Body" body=""
	I1205 06:45:04.776956  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:04.777296  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:05.276052  480112 type.go:168] "Request Body" body=""
	I1205 06:45:05.276135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:05.276493  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:05.776076  480112 type.go:168] "Request Body" body=""
	I1205 06:45:05.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:05.776382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:06.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:06.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:06.276505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:06.276549  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:06.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:45:06.776318  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:06.776607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:07.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:45:07.276333  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:07.276647  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:07.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:45:07.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:07.776505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:08.276124  480112 type.go:168] "Request Body" body=""
	I1205 06:45:08.276205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:08.276525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:08.276582  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:08.776068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:08.776135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:08.776427  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:09.276142  480112 type.go:168] "Request Body" body=""
	I1205 06:45:09.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:09.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:09.776450  480112 type.go:168] "Request Body" body=""
	I1205 06:45:09.776528  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:09.776851  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:10.276606  480112 type.go:168] "Request Body" body=""
	I1205 06:45:10.276677  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:10.277000  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:10.277057  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:10.776657  480112 type.go:168] "Request Body" body=""
	I1205 06:45:10.776732  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:10.777046  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:11.276811  480112 type.go:168] "Request Body" body=""
	I1205 06:45:11.276882  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:11.277223  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:11.776851  480112 type.go:168] "Request Body" body=""
	I1205 06:45:11.776931  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:11.777196  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:12.276964  480112 type.go:168] "Request Body" body=""
	I1205 06:45:12.277038  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:12.277388  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:12.277445  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:12.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:12.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:12.776553  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:13.276228  480112 type.go:168] "Request Body" body=""
	I1205 06:45:13.276298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:13.276604  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:13.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:13.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:13.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:14.276149  480112 type.go:168] "Request Body" body=""
	I1205 06:45:14.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:14.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:14.776072  480112 type.go:168] "Request Body" body=""
	I1205 06:45:14.776145  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:14.776458  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:14.776508  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:15.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:15.276200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:15.276794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:15.776653  480112 type.go:168] "Request Body" body=""
	I1205 06:45:15.776744  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:15.777091  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:16.276715  480112 type.go:168] "Request Body" body=""
	I1205 06:45:16.276782  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:16.277064  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:16.776929  480112 type.go:168] "Request Body" body=""
	I1205 06:45:16.777011  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:16.777376  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:16.777433  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:17.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:45:17.276186  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:17.276483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:17.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:17.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:17.776459  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:18.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:18.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:18.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:18.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:45:18.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:18.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:19.276077  480112 type.go:168] "Request Body" body=""
	I1205 06:45:19.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:19.276409  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:19.276457  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:19.776140  480112 type.go:168] "Request Body" body=""
	I1205 06:45:19.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:19.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:20.276265  480112 type.go:168] "Request Body" body=""
	I1205 06:45:20.276339  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:20.276676  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:20.776202  480112 type.go:168] "Request Body" body=""
	I1205 06:45:20.776280  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:20.776606  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:21.276132  480112 type.go:168] "Request Body" body=""
	I1205 06:45:21.276210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:21.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:21.276636  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:21.776310  480112 type.go:168] "Request Body" body=""
	I1205 06:45:21.776390  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:21.776682  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:22.276070  480112 type.go:168] "Request Body" body=""
	I1205 06:45:22.276144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:22.276441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:22.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:22.776202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:22.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:23.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:45:23.276321  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:23.276652  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:23.276714  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:23.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:45:23.776172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:23.776500  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:24.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:45:24.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:24.276572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:24.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:24.776624  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:24.776995  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:25.276720  480112 type.go:168] "Request Body" body=""
	I1205 06:45:25.276795  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:25.277096  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:25.277138  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:25.777018  480112 type.go:168] "Request Body" body=""
	I1205 06:45:25.777094  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:25.777486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:26.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:26.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:26.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:26.776075  480112 type.go:168] "Request Body" body=""
	I1205 06:45:26.776149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:26.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:27.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:27.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:27.276551  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:27.776258  480112 type.go:168] "Request Body" body=""
	I1205 06:45:27.776335  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:27.776680  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:27.776737  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:28.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.276623  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:28.776331  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.776414  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.776707  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.276416  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.276493  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.276818  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.776639  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.776714  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.776980  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:29.777029  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:30.276781  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.276856  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.277201  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:30.776871  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.776952  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.777288  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.277017  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.277360  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.776747  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.776819  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.777132  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:31.777186  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:32.276950  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.277023  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.277345  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:32.776087  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.776177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.776473  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.276576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.776178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.776686  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:34.276385  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.276462  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.276731  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:34.276780  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:34.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.776596  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.776911  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.276784  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.276862  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.277181  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.776969  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.777301  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:36.277066  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.277146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.277501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:36.277569  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:36.776101  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.776185  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.276163  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.776112  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.276202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.276516  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:38.776483  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:39.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.276555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:39.776407  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.776488  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.776826  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.276588  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.276663  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.276937  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.776794  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.776875  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.777212  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:40.777264  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:41.277035  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.277114  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.277443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:41.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.776176  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.276209  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.276666  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.776161  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.776562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:43.276203  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.276275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.276590  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:43.276647  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:43.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.276530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.776148  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:45.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.276708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:45.276763  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:45.776444  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.776519  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.776847  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.276602  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.276676  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.276921  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.776673  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.776790  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.777114  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:47.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:47.277302  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:47.776975  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.777051  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.777338  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.276041  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.276118  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.276410  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.776030  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.776109  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.776395  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.276033  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.276104  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.276393  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:49.776593  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:50.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.276494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:50.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.776209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:52.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.276243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.276510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:52.276549  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:52.776182  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.776256  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.776572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.776498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:54.276193  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.276592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:54.276649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:54.776395  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.776470  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.276132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.276389  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.776213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.776545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:56.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.276656  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:56.276710  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:56.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.776281  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.776602  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.276123  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.276534  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.776290  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.776381  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.776755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.276054  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.276133  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.276434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:58.776554  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:59.276223  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:59.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.276689  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.276784  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.277182  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.776974  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.777053  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.777397  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:00.777455  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:01.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.276181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:01.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.276247  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.276641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:03.276061  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.276138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:03.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:03.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.276265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.776433  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.776505  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.776849  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:05.276666  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.276770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:05.277147  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:05.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:06.276074  480112 node_ready.go:38] duration metric: took 6m0.000169865s for node "functional-787602" to be "Ready" ...
	I1205 06:46:06.279558  480112 out.go:203] 
	W1205 06:46:06.282535  480112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:46:06.282557  480112 out.go:285] * 
	W1205 06:46:06.284719  480112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:46:06.287525  480112 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289522924Z" level=info msg="Using the internal default seccomp profile"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289531491Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289537603Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289543675Z" level=info msg="RDT not available in the host system"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289557288Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.290430521Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.290452134Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.290478998Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291097926Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291117701Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291242224Z" level=info msg="Updated default CNI network name to "
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291946561Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.292315246Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.29242024Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.33570484Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.335957142Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336033607Z" level=info msg="Create NRI interface"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336191582Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336206983Z" level=info msg="runtime interface created"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336225494Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336231804Z" level=info msg="runtime interface starting up..."
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336238196Z" level=info msg="starting plugins..."
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336255049Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336317647Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:40:03 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:08.164058    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:08.164711    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:08.166492    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:08.167067    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:08.168868    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:46:08 up  3:28,  0 user,  load average: 0.35, 0.22, 0.48
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:46:05 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:06 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 05 06:46:06 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:06 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:06 functional-787602 kubelet[9166]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:06 functional-787602 kubelet[9166]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:06 functional-787602 kubelet[9166]: E1205 06:46:06.585581    9166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:06 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:06 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:07 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 05 06:46:07 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:07 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:07 functional-787602 kubelet[9187]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:07 functional-787602 kubelet[9187]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:07 functional-787602 kubelet[9187]: E1205 06:46:07.329037    9187 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:07 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:07 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1133.
	Dec 05 06:46:08 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:08 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:08 functional-787602 kubelet[9256]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:08 functional-787602 kubelet[9256]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:08 functional-787602 kubelet[9256]: E1205 06:46:08.088003    9256 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (361.672791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.65s)
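
The kubelet entries above explain the 6m0s Ready-wait timeout: every restart (counters 1131-1133) fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver behind 192.168.49.2:8441 never comes up and each node poll ends in connection refused. A minimal Go sketch, not part of the harness, of the standard host-side check (the unified cgroup v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers; a cgroup v1 host does not):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a cgroup v2 (unified) host this file exists; on cgroup v1 it does not.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 host: kubelet v1.35+ validation should pass")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 host: matches the kubelet validation failure above")
		} else {
			fmt.Println("could not determine cgroup version:", err)
		}
	}

On the Ubuntu 20.04 runner identified in the kernel section above, this check would be expected to report cgroup v1.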

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-787602 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-787602 get po -A: exit status 1 (57.056262ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-787602 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-787602 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-787602 get po -A"
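
A minimal Go sketch, separate from functional_test.go, of the probe behind the stderr above: a plain TCP dial to the apiserver endpoint. "connection refused" means nothing is listening on the port at all, as opposed to an apiserver that is up but unhealthy. The address is taken from the kubectl error in this run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint copied from the kubectl stderr above; adjust for other runs.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// Expected result in this run: connect: connection refused.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("port is listening; the failure would be higher in the stack")
	}
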
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (329.228419ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
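
The docker inspect output above shows 8441/tcp published to 127.0.0.1:33151, so the same connection-refused result could also be cross-checked from the host side, independent of the container network. A minimal Go sketch, not part of helpers_test.go, that reads the published binding back out of the inspect JSON (container name and port taken from this run):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect models just the NetworkSettings.Ports slice of the docker
	// inspect JSON shown above; all other fields are ignored.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-787602").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			fmt.Println("unexpected inspect output:", err)
			return
		}
		// In this run the loop would print: apiserver published at 127.0.0.1:33151
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
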
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 logs -n 25: (1.033461934s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-252233 image save kicbase/echo-server:functional-252233 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image rm kicbase/echo-server:functional-252233 --alsologtostderr                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/test/nested/copy/444147/hosts                                                                                         │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image save --daemon kicbase/echo-server:functional-252233 --alsologtostderr                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/444147.pem                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /usr/share/ca-certificates/444147.pem                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/4441472.pem                                                                                                 │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /usr/share/ca-certificates/4441472.pem                                                                                     │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ update-context │ functional-252233 update-context --alsologtostderr -v=2                                                                                                   │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format short --alsologtostderr                                                                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh            │ functional-252233 ssh pgrep buildkitd                                                                                                                     │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image          │ functional-252233 image ls --format yaml --alsologtostderr                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                                    │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format json --alsologtostderr                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls --format table --alsologtostderr                                                                                               │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image          │ functional-252233 image ls                                                                                                                                │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete         │ -p functional-252233                                                                                                                                      │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start          │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ start          │ -p functional-787602 --alsologtostderr -v=8                                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:39 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:39:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:39:59.523609  480112 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:39:59.523793  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.523816  480112 out.go:374] Setting ErrFile to fd 2...
	I1205 06:39:59.523837  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.524220  480112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:39:59.524681  480112 out.go:368] Setting JSON to false
	I1205 06:39:59.525943  480112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12127,"bootTime":1764904673,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:39:59.526021  480112 start.go:143] virtualization:  
	I1205 06:39:59.529485  480112 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:39:59.533299  480112 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:39:59.533430  480112 notify.go:221] Checking for updates...
	I1205 06:39:59.539032  480112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:39:59.542038  480112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:39:59.544821  480112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:39:59.547558  480112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:39:59.550303  480112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:39:59.553653  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:39:59.553793  480112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:39:59.587101  480112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:39:59.587209  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.647016  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.637315829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.647121  480112 docker.go:319] overlay module found
	I1205 06:39:59.650323  480112 out.go:179] * Using the docker driver based on existing profile
	I1205 06:39:59.653400  480112 start.go:309] selected driver: docker
	I1205 06:39:59.653426  480112 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.653516  480112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:39:59.653622  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.713012  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.702941112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.713548  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:39:59.713621  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:39:59.713678  480112 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.716888  480112 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:39:59.719675  480112 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:39:59.722682  480112 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:39:59.725781  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:39:59.725946  480112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:39:59.745247  480112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:39:59.745269  480112 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:39:59.798316  480112 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:40:00.046313  480112 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
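	(The two 404s above mean no preload tarball is published for v1.35.0-beta.0, so minikube falls back to the per-image cache in the lines that follow. A minimal check for whether a preload exists, assuming curl is available on the host; the URL is the one from the warning:)
	
	    $ curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 | head -n1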
	I1205 06:40:00.046504  480112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:40:00.046814  480112 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:40:00.046857  480112 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.046933  480112 start.go:364] duration metric: took 43.709µs to acquireMachinesLock for "functional-787602"
	I1205 06:40:00.046950  480112 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:40:00.046969  480112 fix.go:54] fixHost starting: 
	I1205 06:40:00.047287  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:00.049366  480112 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049539  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:40:00.049567  480112 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 224.085µs
	I1205 06:40:00.049597  480112 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:40:00.049636  480112 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049702  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:40:00.049722  480112 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 89.733µs
	I1205 06:40:00.050277  480112 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:40:00.050353  480112 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051458  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:40:00.051500  480112 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.155091ms
	I1205 06:40:00.051529  480112 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051582  480112 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051659  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:40:00.051680  480112 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 114.34µs
	I1205 06:40:00.051702  480112 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051741  480112 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051791  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:40:00.051822  480112 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 83.054µs
	I1205 06:40:00.063751  480112 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064371  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:40:00.064388  480112 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 658.756µs
	I1205 06:40:00.064400  480112 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:40:00.064453  480112 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064504  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:40:00.064510  480112 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 92.349µs
	I1205 06:40:00.064516  480112 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:40:00.064532  480112 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.067976  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:40:00.068029  480112 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.495239ms
	I1205 06:40:00.068074  480112 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:40:00.058631  480112 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:40:00.068155  480112 cache.go:87] Successfully saved all images to host disk.
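	(For reference, the image cache these lines hit lives under MINIKUBE_HOME; a quick sketch to inspect it, using the path from this job, substituting your own .minikube directory:)
	
	    $ ls /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/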
	I1205 06:40:00.156134  480112 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:40:00.156177  480112 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:40:00.160840  480112 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:40:00.160889  480112 machine.go:94] provisionDockerMachine start ...
	I1205 06:40:00.161003  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.232523  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.232876  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.232886  480112 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:40:00.484459  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.484485  480112 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:40:00.484571  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.540991  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.541328  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.541341  480112 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:40:00.761314  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.761404  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.782315  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.782666  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.782689  480112 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:40:00.934901  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:40:00.934930  480112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:40:00.935005  480112 ubuntu.go:190] setting up certificates
	I1205 06:40:00.935016  480112 provision.go:84] configureAuth start
	I1205 06:40:00.935097  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:00.952439  480112 provision.go:143] copyHostCerts
	I1205 06:40:00.952486  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952527  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:40:00.952543  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952619  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:40:00.952705  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952727  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:40:00.952737  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952765  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:40:00.952809  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952828  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:40:00.952837  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952861  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:40:00.952911  480112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:40:01.160028  480112 provision.go:177] copyRemoteCerts
	I1205 06:40:01.160150  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:40:01.160201  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.184354  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.295740  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:40:01.295812  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:40:01.316925  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:40:01.316986  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:40:01.339507  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:40:01.339574  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:40:01.358710  480112 provision.go:87] duration metric: took 423.67042ms to configureAuth
	I1205 06:40:01.358788  480112 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:40:01.358981  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:01.359104  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.377010  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:01.377340  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:01.377360  480112 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:40:01.723262  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:40:01.723303  480112 machine.go:97] duration metric: took 1.56238873s to provisionDockerMachine
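	(A minimal way to confirm from outside the node that the CRI-O drop-in written above took effect, assuming the minikube binary and this profile name; the file path and unit name are the ones logged:)
	
	    $ minikube -p functional-787602 ssh -- cat /etc/sysconfig/crio.minikube
	    $ minikube -p functional-787602 ssh -- systemctl is-active crio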
	I1205 06:40:01.723316  480112 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:40:01.723329  480112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:40:01.723398  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:40:01.723446  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.742177  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.847102  480112 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:40:01.850854  480112 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:40:01.850880  480112 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:40:01.850885  480112 command_runner.go:130] > VERSION_ID="12"
	I1205 06:40:01.850889  480112 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:40:01.850897  480112 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:40:01.850901  480112 command_runner.go:130] > ID=debian
	I1205 06:40:01.850906  480112 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:40:01.850910  480112 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:40:01.850918  480112 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:40:01.850955  480112 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:40:01.850978  480112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:40:01.850990  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:40:01.851049  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:40:01.851138  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:40:01.851149  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /etc/ssl/certs/4441472.pem
	I1205 06:40:01.851230  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:40:01.851237  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> /etc/test/nested/copy/444147/hosts
	I1205 06:40:01.851282  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:40:01.859516  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:01.879483  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:40:01.898655  480112 start.go:296] duration metric: took 175.324245ms for postStartSetup
	I1205 06:40:01.898744  480112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:40:01.898799  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.917838  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.020238  480112 command_runner.go:130] > 18%
	I1205 06:40:02.020354  480112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:40:02.025815  480112 command_runner.go:130] > 160G
	I1205 06:40:02.026493  480112 fix.go:56] duration metric: took 1.979519007s for fixHost
	I1205 06:40:02.026516  480112 start.go:83] releasing machines lock for "functional-787602", held for 1.979574696s
	I1205 06:40:02.026587  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:02.046979  480112 ssh_runner.go:195] Run: cat /version.json
	I1205 06:40:02.047030  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.047280  480112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:40:02.047345  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.081102  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.085747  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.189932  480112 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:40:02.190072  480112 ssh_runner.go:195] Run: systemctl --version
	I1205 06:40:02.280062  480112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:40:02.282950  480112 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:40:02.282989  480112 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:40:02.283061  480112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:40:02.319896  480112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:40:02.324212  480112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:40:02.324374  480112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:40:02.324444  480112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:40:02.332670  480112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:40:02.332736  480112 start.go:496] detecting cgroup driver to use...
	I1205 06:40:02.332774  480112 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:40:02.332831  480112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:40:02.348502  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:40:02.361851  480112 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:40:02.361926  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:40:02.380602  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:40:02.393710  480112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:40:02.522109  480112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:40:02.655884  480112 docker.go:234] disabling docker service ...
	I1205 06:40:02.655958  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:40:02.673330  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:40:02.687649  480112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:40:02.802223  480112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:40:02.930343  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:40:02.944017  480112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:40:02.956898  480112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 06:40:02.958122  480112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:40:02.958248  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.967567  480112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:40:02.967712  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.976781  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.985897  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.994984  480112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:40:03.003975  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.013874  480112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.022919  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.032163  480112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:40:03.038816  480112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:40:03.039990  480112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:40:03.049427  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.175291  480112 ssh_runner.go:195] Run: sudo systemctl restart crio
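	(Condensed, the CRI-O reconfiguration above is just these edits to /etc/crio/crio.conf.d/02-crio.conf followed by a restart; a sketch of the same sed commands as logged, to be run inside the node:)
	
	    $ sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    $ sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    $ sudo systemctl daemon-reload && sudo systemctl restart crio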
	I1205 06:40:03.341374  480112 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:40:03.341477  480112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:40:03.345425  480112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 06:40:03.345448  480112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:40:03.345464  480112 command_runner.go:130] > Device: 0,73	Inode: 1755        Links: 1
	I1205 06:40:03.345472  480112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:03.345477  480112 command_runner.go:130] > Access: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345484  480112 command_runner.go:130] > Modify: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345489  480112 command_runner.go:130] > Change: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345493  480112 command_runner.go:130] >  Birth: -
	I1205 06:40:03.345525  480112 start.go:564] Will wait 60s for crictl version
	I1205 06:40:03.345579  480112 ssh_runner.go:195] Run: which crictl
	I1205 06:40:03.348931  480112 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:40:03.349401  480112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:40:03.373825  480112 command_runner.go:130] > Version:  0.1.0
	I1205 06:40:03.373849  480112 command_runner.go:130] > RuntimeName:  cri-o
	I1205 06:40:03.373973  480112 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1205 06:40:03.374159  480112 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:40:03.376168  480112 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
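	(The same version facts can be queried directly with crictl against the socket minikube waits on above, assuming crictl is on PATH inside the node; the endpoint matches the /etc/crictl.yaml written earlier:)
	
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version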
	I1205 06:40:03.376252  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.403613  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.403690  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.403710  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.403727  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.403756  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.403777  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.403795  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.403813  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.403844  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.403865  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.403879  480112 command_runner.go:130] >      static
	I1205 06:40:03.403895  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.403924  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.403945  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.403964  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.403979  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.404006  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.404027  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.404044  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.404059  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.406234  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.432776  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.432811  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.432836  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.432843  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.432849  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.432862  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.432872  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.432877  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.432886  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.432908  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.432916  480112 command_runner.go:130] >      static
	I1205 06:40:03.432920  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.432948  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.432956  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.432959  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.432963  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.432970  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.432998  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.433006  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.433010  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.440242  480112 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:40:03.443151  480112 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:40:03.459691  480112 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:40:03.463610  480112 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:40:03.463748  480112 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:40:03.463853  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:40:03.463910  480112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:40:03.497207  480112 command_runner.go:130] > {
	I1205 06:40:03.497226  480112 command_runner.go:130] >   "images":  [
	I1205 06:40:03.497231  480112 command_runner.go:130] >     {
	I1205 06:40:03.497239  480112 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:40:03.497244  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497250  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:40:03.497253  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497257  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497267  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1205 06:40:03.497271  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497276  480112 command_runner.go:130] >       "size":  "29035622",
	I1205 06:40:03.497279  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497283  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497286  480112 command_runner.go:130] >     },
	I1205 06:40:03.497290  480112 command_runner.go:130] >     {
	I1205 06:40:03.497297  480112 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:40:03.497301  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497306  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:40:03.497309  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497313  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497321  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1205 06:40:03.497324  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497328  480112 command_runner.go:130] >       "size":  "74488375",
	I1205 06:40:03.497332  480112 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:40:03.497336  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497340  480112 command_runner.go:130] >     },
	I1205 06:40:03.497343  480112 command_runner.go:130] >     {
	I1205 06:40:03.497350  480112 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:40:03.497354  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497359  480112 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:40:03.497362  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497366  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497388  480112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1205 06:40:03.497393  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497397  480112 command_runner.go:130] >       "size":  "60854229",
	I1205 06:40:03.497401  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497405  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497409  480112 command_runner.go:130] >       },
	I1205 06:40:03.497413  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497417  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497421  480112 command_runner.go:130] >     },
	I1205 06:40:03.497424  480112 command_runner.go:130] >     {
	I1205 06:40:03.497430  480112 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:40:03.497434  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497439  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:40:03.497442  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497446  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497454  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1205 06:40:03.497459  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497463  480112 command_runner.go:130] >       "size":  "84947242",
	I1205 06:40:03.497466  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497469  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497473  480112 command_runner.go:130] >       },
	I1205 06:40:03.497476  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497480  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497483  480112 command_runner.go:130] >     },
	I1205 06:40:03.497486  480112 command_runner.go:130] >     {
	I1205 06:40:03.497492  480112 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:40:03.497496  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497501  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:40:03.497505  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497509  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497517  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1205 06:40:03.497520  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497529  480112 command_runner.go:130] >       "size":  "72167568",
	I1205 06:40:03.497539  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497542  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497545  480112 command_runner.go:130] >       },
	I1205 06:40:03.497549  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497552  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497555  480112 command_runner.go:130] >     },
	I1205 06:40:03.497558  480112 command_runner.go:130] >     {
	I1205 06:40:03.497564  480112 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:40:03.497568  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497573  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:40:03.497575  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497579  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497588  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1205 06:40:03.497592  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497595  480112 command_runner.go:130] >       "size":  "74105124",
	I1205 06:40:03.497599  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497603  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497606  480112 command_runner.go:130] >     },
	I1205 06:40:03.497609  480112 command_runner.go:130] >     {
	I1205 06:40:03.497615  480112 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:40:03.497618  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497624  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:40:03.497627  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497630  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497638  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1205 06:40:03.497641  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497645  480112 command_runner.go:130] >       "size":  "49819792",
	I1205 06:40:03.497648  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497652  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497655  480112 command_runner.go:130] >       },
	I1205 06:40:03.497659  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497663  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497666  480112 command_runner.go:130] >     },
	I1205 06:40:03.497672  480112 command_runner.go:130] >     {
	I1205 06:40:03.497679  480112 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:40:03.497683  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497687  480112 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.497690  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497694  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497701  480112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1205 06:40:03.497705  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497708  480112 command_runner.go:130] >       "size":  "517328",
	I1205 06:40:03.497712  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497715  480112 command_runner.go:130] >         "value":  "65535"
	I1205 06:40:03.497718  480112 command_runner.go:130] >       },
	I1205 06:40:03.497722  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497726  480112 command_runner.go:130] >       "pinned":  true
	I1205 06:40:03.497729  480112 command_runner.go:130] >     }
	I1205 06:40:03.497732  480112 command_runner.go:130] >   ]
	I1205 06:40:03.497735  480112 command_runner.go:130] > }
	I1205 06:40:03.499390  480112 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:40:03.499408  480112 cache_images.go:86] Images are preloaded, skipping loading
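
The image list dumped above is the JSON shape that crictl-style image queries return from CRI-O. As a minimal sketch (not minikube's actual code), it can be decoded in Go as below; the field names are taken from the log output, and the top-level "images" key is assumed from the usual CRI image-service response shape.

package main

import (
	"encoding/json"
	"fmt"
)

// criImage mirrors one entry of the image list dumped above.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // bytes, serialized as a decimal string
	Username    string   `json:"username"`
	Pinned      bool     `json:"pinned"`
}

// imageList is the assumed top-level shape of the response.
type imageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// One entry copied from the dump above (the pause image).
	payload := `{"images":[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoTags":["registry.k8s.io/pause:3.10.1"],"repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"size":"517328","username":"","pinned":true}]}`
	var list imageList
	if err := json.Unmarshal([]byte(payload), &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}
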
	I1205 06:40:03.499417  480112 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:40:03.499515  480112 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
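
A note on the drop-in above: the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet unit before setting the new command line. A minimal sketch (a hypothetical template, not minikube's actual one) of rendering such a drop-in with text/template, using the values from the log:

package main

import (
	"os"
	"text/template"
)

// unit is a simplified stand-in for the kubelet drop-in shown above.
const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Kubelet": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		"Node":    "functional-787602",
		"IP":      "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}
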
	I1205 06:40:03.499587  480112 ssh_runner.go:195] Run: crio config
	I1205 06:40:03.548638  480112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 06:40:03.548661  480112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 06:40:03.548669  480112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 06:40:03.548671  480112 command_runner.go:130] > #
	I1205 06:40:03.548686  480112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 06:40:03.548693  480112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 06:40:03.548700  480112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 06:40:03.548716  480112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 06:40:03.548720  480112 command_runner.go:130] > # reload'.
	I1205 06:40:03.548726  480112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 06:40:03.548733  480112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 06:40:03.548739  480112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 06:40:03.548745  480112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 06:40:03.548748  480112 command_runner.go:130] > [crio]
	I1205 06:40:03.548755  480112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 06:40:03.548760  480112 command_runner.go:130] > # container images, in this directory.
	I1205 06:40:03.549179  480112 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 06:40:03.549226  480112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 06:40:03.549246  480112 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1205 06:40:03.549268  480112 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from the root directory.
	I1205 06:40:03.549287  480112 command_runner.go:130] > # imagestore = ""
	I1205 06:40:03.549306  480112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 06:40:03.549324  480112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 06:40:03.549341  480112 command_runner.go:130] > # storage_driver = "overlay"
	I1205 06:40:03.549356  480112 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1205 06:40:03.549385  480112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 06:40:03.549402  480112 command_runner.go:130] > # storage_option = [
	I1205 06:40:03.549417  480112 command_runner.go:130] > # ]
	I1205 06:40:03.549435  480112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 06:40:03.549461  480112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 06:40:03.549487  480112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 06:40:03.549504  480112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 06:40:03.549521  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 06:40:03.549545  480112 command_runner.go:130] > # always happen on a node reboot
	I1205 06:40:03.549737  480112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 06:40:03.549768  480112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 06:40:03.549775  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 06:40:03.549781  480112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 06:40:03.549785  480112 command_runner.go:130] > # version_file_persist = ""
	I1205 06:40:03.549793  480112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 06:40:03.549801  480112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 06:40:03.549805  480112 command_runner.go:130] > # internal_wipe = true
	I1205 06:40:03.549813  480112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 06:40:03.549818  480112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 06:40:03.549822  480112 command_runner.go:130] > # internal_repair = true
	I1205 06:40:03.549828  480112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 06:40:03.549834  480112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 06:40:03.549840  480112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 06:40:03.549845  480112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
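
Taken together, the three files above drive crio wipe's decisions. A minimal sketch of that decision logic as described in the comments (hedged: the real implementation lives in CRI-O and compares versions rather than mere file existence, and version_file_persist is unset in this dump):

package main

import (
	"fmt"
	"os"
)

// fileExists is a tiny helper for the sketch below.
func fileExists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	// version_file lives on tmpfs, so it is gone after a reboot.
	rebooted := !fileExists("/var/run/crio/version")
	// Hypothetical stand-in for the persistent version mismatch check;
	// the dump above leaves version_file_persist empty.
	upgraded := !fileExists("/var/lib/crio/version")
	// Missing clean-shutdown file means crio did not sync before stopping.
	unclean := !fileExists("/var/lib/crio/clean.shutdown")

	fmt.Println("wipe containers:", rebooted)
	fmt.Println("wipe images:", upgraded)
	fmt.Println("wipe storage dir:", unclean)
}
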
	I1205 06:40:03.549854  480112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 06:40:03.549858  480112 command_runner.go:130] > [crio.api]
	I1205 06:40:03.549863  480112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 06:40:03.549867  480112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 06:40:03.549872  480112 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 06:40:03.549876  480112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 06:40:03.549883  480112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 06:40:03.549889  480112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 06:40:03.549892  480112 command_runner.go:130] > # stream_port = "0"
	I1205 06:40:03.549897  480112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 06:40:03.549901  480112 command_runner.go:130] > # stream_enable_tls = false
	I1205 06:40:03.549907  480112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 06:40:03.549911  480112 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 06:40:03.549917  480112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 06:40:03.549923  480112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549927  480112 command_runner.go:130] > # stream_tls_cert = ""
	I1205 06:40:03.549933  480112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 06:40:03.549939  480112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549942  480112 command_runner.go:130] > # stream_tls_key = ""
	I1205 06:40:03.549948  480112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 06:40:03.549954  480112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 06:40:03.549958  480112 command_runner.go:130] > # automatically pick up the changes.
	I1205 06:40:03.549962  480112 command_runner.go:130] > # stream_tls_ca = ""
	I1205 06:40:03.549979  480112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549984  480112 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 06:40:03.549991  480112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549996  480112 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 06:40:03.550002  480112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 06:40:03.550007  480112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 06:40:03.550010  480112 command_runner.go:130] > [crio.runtime]
	I1205 06:40:03.550016  480112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 06:40:03.550021  480112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 06:40:03.550025  480112 command_runner.go:130] > # "nofile=1024:2048"
	I1205 06:40:03.550034  480112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 06:40:03.550038  480112 command_runner.go:130] > # default_ulimits = [
	I1205 06:40:03.550041  480112 command_runner.go:130] > # ]
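
For reference, the "<ulimit name>=<soft limit>:<hard limit>" syntax described above is straightforward to parse; a minimal Go sketch (a hypothetical helper, not CRI-O's parser):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUlimit splits an entry such as "nofile=1024:2048" into its parts.
func parseUlimit(s string) (string, int64, int64, error) {
	name, limits, ok := strings.Cut(s, "=")
	if !ok {
		return "", 0, 0, fmt.Errorf("missing '=' in %q", s)
	}
	softStr, hardStr, ok := strings.Cut(limits, ":")
	if !ok {
		return "", 0, 0, fmt.Errorf("missing ':' in %q", s)
	}
	soft, err := strconv.ParseInt(softStr, 10, 64)
	if err != nil {
		return "", 0, 0, err
	}
	hard, err := strconv.ParseInt(hardStr, 10, 64)
	if err != nil {
		return "", 0, 0, err
	}
	return name, soft, hard, nil
}

func main() {
	fmt.Println(parseUlimit("nofile=1024:2048")) // nofile 1024 2048 <nil>
}
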
	I1205 06:40:03.550047  480112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 06:40:03.550050  480112 command_runner.go:130] > # no_pivot = false
	I1205 06:40:03.550056  480112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 06:40:03.550062  480112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 06:40:03.550067  480112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 06:40:03.550072  480112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 06:40:03.550077  480112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 06:40:03.550084  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550087  480112 command_runner.go:130] > # conmon = ""
	I1205 06:40:03.550092  480112 command_runner.go:130] > # Cgroup setting for conmon
	I1205 06:40:03.550099  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 06:40:03.550102  480112 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 06:40:03.550108  480112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 06:40:03.550115  480112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 06:40:03.550124  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550128  480112 command_runner.go:130] > # conmon_env = [
	I1205 06:40:03.550130  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550136  480112 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 06:40:03.550141  480112 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 06:40:03.550146  480112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 06:40:03.550150  480112 command_runner.go:130] > # default_env = [
	I1205 06:40:03.550152  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550158  480112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 06:40:03.550165  480112 command_runner.go:130] > # This option is deprecated, and will be inferred from whether SELinux is enabled on the host in the future.
	I1205 06:40:03.550169  480112 command_runner.go:130] > # selinux = false
	I1205 06:40:03.550180  480112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 06:40:03.550188  480112 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1205 06:40:03.550193  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550197  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.550202  480112 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1205 06:40:03.550212  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550216  480112 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1205 06:40:03.550223  480112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 06:40:03.550229  480112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 06:40:03.550235  480112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 06:40:03.550241  480112 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 06:40:03.550246  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550250  480112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 06:40:03.550255  480112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 06:40:03.550259  480112 command_runner.go:130] > # the cgroup blockio controller.
	I1205 06:40:03.550263  480112 command_runner.go:130] > # blockio_config_file = ""
	I1205 06:40:03.550269  480112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 06:40:03.550273  480112 command_runner.go:130] > # blockio parameters.
	I1205 06:40:03.550277  480112 command_runner.go:130] > # blockio_reload = false
	I1205 06:40:03.550284  480112 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 06:40:03.550287  480112 command_runner.go:130] > # irqbalance daemon.
	I1205 06:40:03.550292  480112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 06:40:03.550298  480112 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1205 06:40:03.550305  480112 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 06:40:03.550313  480112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 06:40:03.550319  480112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 06:40:03.550325  480112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 06:40:03.550330  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550333  480112 command_runner.go:130] > # rdt_config_file = ""
	I1205 06:40:03.550338  480112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 06:40:03.550342  480112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 06:40:03.550348  480112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 06:40:03.550711  480112 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 06:40:03.550724  480112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 06:40:03.550731  480112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 06:40:03.550734  480112 command_runner.go:130] > # will be added.
	I1205 06:40:03.550738  480112 command_runner.go:130] > # default_capabilities = [
	I1205 06:40:03.550742  480112 command_runner.go:130] > # 	"CHOWN",
	I1205 06:40:03.550746  480112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 06:40:03.550749  480112 command_runner.go:130] > # 	"FSETID",
	I1205 06:40:03.550752  480112 command_runner.go:130] > # 	"FOWNER",
	I1205 06:40:03.550756  480112 command_runner.go:130] > # 	"SETGID",
	I1205 06:40:03.550759  480112 command_runner.go:130] > # 	"SETUID",
	I1205 06:40:03.550782  480112 command_runner.go:130] > # 	"SETPCAP",
	I1205 06:40:03.550786  480112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 06:40:03.550789  480112 command_runner.go:130] > # 	"KILL",
	I1205 06:40:03.550792  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550800  480112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 06:40:03.550810  480112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 06:40:03.550815  480112 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 06:40:03.550821  480112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 06:40:03.550827  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550831  480112 command_runner.go:130] > default_sysctls = [
	I1205 06:40:03.550835  480112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 06:40:03.550838  480112 command_runner.go:130] > ]
	I1205 06:40:03.550842  480112 command_runner.go:130] > # List of devices on the host that a
	I1205 06:40:03.550849  480112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 06:40:03.550852  480112 command_runner.go:130] > # allowed_devices = [
	I1205 06:40:03.550856  480112 command_runner.go:130] > # 	"/dev/fuse",
	I1205 06:40:03.550859  480112 command_runner.go:130] > # 	"/dev/net/tun",
	I1205 06:40:03.550863  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550867  480112 command_runner.go:130] > # List of additional devices, specified as
	I1205 06:40:03.550875  480112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 06:40:03.550880  480112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 06:40:03.550886  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550889  480112 command_runner.go:130] > # additional_devices = [
	I1205 06:40:03.550894  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550899  480112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 06:40:03.550905  480112 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 06:40:03.550909  480112 command_runner.go:130] > # 	"/etc/cdi",
	I1205 06:40:03.550912  480112 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 06:40:03.550915  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550921  480112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 06:40:03.550927  480112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 06:40:03.550931  480112 command_runner.go:130] > # Defaults to false.
	I1205 06:40:03.550936  480112 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 06:40:03.550942  480112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 06:40:03.550949  480112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 06:40:03.550952  480112 command_runner.go:130] > # hooks_dir = [
	I1205 06:40:03.550956  480112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 06:40:03.550962  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550972  480112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 06:40:03.550979  480112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 06:40:03.550984  480112 command_runner.go:130] > # its default mounts from the following two files:
	I1205 06:40:03.550987  480112 command_runner.go:130] > #
	I1205 06:40:03.550993  480112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 06:40:03.550999  480112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 06:40:03.551004  480112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 06:40:03.551007  480112 command_runner.go:130] > #
	I1205 06:40:03.551013  480112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 06:40:03.551019  480112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 06:40:03.551025  480112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 06:40:03.551030  480112 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 06:40:03.551032  480112 command_runner.go:130] > #
	I1205 06:40:03.551036  480112 command_runner.go:130] > # default_mounts_file = ""
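
The /SRC:/DST mounts format above is one mount per line; a minimal hypothetical Go sketch of reading it (the zoneinfo path in the example is illustrative only):

package main

import (
	"fmt"
	"strings"
)

// parseMounts reads the one-mount-per-line /SRC:/DST format described above.
// Blank lines and '#' comments are skipped (an assumption, mirroring common
// conf-file conventions).
func parseMounts(data string) map[string]string {
	mounts := map[string]string{}
	for _, line := range strings.Split(data, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if src, dst, ok := strings.Cut(line, ":"); ok {
			mounts[src] = dst
		}
	}
	return mounts
}

func main() {
	fmt.Println(parseMounts("/usr/share/zoneinfo:/usr/share/zoneinfo\n"))
}
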
	I1205 06:40:03.551041  480112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 06:40:03.551047  480112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 06:40:03.551051  480112 command_runner.go:130] > # pids_limit = -1
	I1205 06:40:03.551057  480112 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 06:40:03.551063  480112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 06:40:03.551069  480112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 06:40:03.551077  480112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 06:40:03.551080  480112 command_runner.go:130] > # log_size_max = -1
	I1205 06:40:03.551087  480112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 06:40:03.551091  480112 command_runner.go:130] > # log_to_journald = false
	I1205 06:40:03.551098  480112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 06:40:03.551103  480112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 06:40:03.551108  480112 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 06:40:03.551113  480112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 06:40:03.551118  480112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 06:40:03.551121  480112 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 06:40:03.551127  480112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 06:40:03.551131  480112 command_runner.go:130] > # read_only = false
	I1205 06:40:03.551137  480112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 06:40:03.551147  480112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 06:40:03.551151  480112 command_runner.go:130] > # live configuration reload.
	I1205 06:40:03.551154  480112 command_runner.go:130] > # log_level = "info"
	I1205 06:40:03.551160  480112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 06:40:03.551164  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.551168  480112 command_runner.go:130] > # log_filter = ""
	I1205 06:40:03.551174  480112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551180  480112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 06:40:03.551184  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551192  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551196  480112 command_runner.go:130] > # uid_mappings = ""
	I1205 06:40:03.551201  480112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551208  480112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 06:40:03.551212  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551219  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551223  480112 command_runner.go:130] > # gid_mappings = ""
	I1205 06:40:03.551229  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 06:40:03.551235  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551241  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551249  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551253  480112 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 06:40:03.551259  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 06:40:03.551264  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551271  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551278  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551282  480112 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 06:40:03.551288  480112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 06:40:03.551296  480112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 06:40:03.551302  480112 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 06:40:03.551306  480112 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 06:40:03.551311  480112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 06:40:03.551317  480112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 06:40:03.551322  480112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 06:40:03.551330  480112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 06:40:03.551333  480112 command_runner.go:130] > # drop_infra_ctr = true
	I1205 06:40:03.551340  480112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 06:40:03.551346  480112 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 06:40:03.551353  480112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 06:40:03.551357  480112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 06:40:03.551364  480112 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 06:40:03.551370  480112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 06:40:03.551375  480112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 06:40:03.551380  480112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 06:40:03.551384  480112 command_runner.go:130] > # shared_cpuset = ""
	I1205 06:40:03.551390  480112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 06:40:03.551395  480112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 06:40:03.551398  480112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 06:40:03.551405  480112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 06:40:03.551408  480112 command_runner.go:130] > # pinns_path = ""
	I1205 06:40:03.551414  480112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 06:40:03.551420  480112 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 06:40:03.551424  480112 command_runner.go:130] > # enable_criu_support = true
	I1205 06:40:03.551428  480112 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 06:40:03.551434  480112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 06:40:03.551438  480112 command_runner.go:130] > # enable_pod_events = false
	I1205 06:40:03.551444  480112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 06:40:03.551449  480112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 06:40:03.551453  480112 command_runner.go:130] > # default_runtime = "crun"
	I1205 06:40:03.551458  480112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 06:40:03.551466  480112 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1205 06:40:03.551475  480112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 06:40:03.551480  480112 command_runner.go:130] > # creation as a file is not desired either.
	I1205 06:40:03.551488  480112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 06:40:03.551495  480112 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 06:40:03.551499  480112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 06:40:03.551502  480112 command_runner.go:130] > # ]
	I1205 06:40:03.551511  480112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 06:40:03.551518  480112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 06:40:03.551524  480112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 06:40:03.551528  480112 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 06:40:03.551532  480112 command_runner.go:130] > #
	I1205 06:40:03.551536  480112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 06:40:03.551541  480112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 06:40:03.551544  480112 command_runner.go:130] > # runtime_type = "oci"
	I1205 06:40:03.551549  480112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 06:40:03.551553  480112 command_runner.go:130] > # inherit_default_runtime = false
	I1205 06:40:03.551558  480112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 06:40:03.551562  480112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 06:40:03.551566  480112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 06:40:03.551570  480112 command_runner.go:130] > # monitor_env = []
	I1205 06:40:03.551574  480112 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 06:40:03.551578  480112 command_runner.go:130] > # allowed_annotations = []
	I1205 06:40:03.551583  480112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 06:40:03.551587  480112 command_runner.go:130] > # no_sync_log = false
	I1205 06:40:03.551590  480112 command_runner.go:130] > # default_annotations = {}
	I1205 06:40:03.551594  480112 command_runner.go:130] > # stream_websockets = false
	I1205 06:40:03.551598  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.551631  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.551636  480112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 06:40:03.551643  480112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 06:40:03.551649  480112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 06:40:03.551656  480112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 06:40:03.551659  480112 command_runner.go:130] > #   in $PATH.
	I1205 06:40:03.551665  480112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 06:40:03.551669  480112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 06:40:03.551675  480112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 06:40:03.551678  480112 command_runner.go:130] > #   state.
	I1205 06:40:03.551685  480112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 06:40:03.551690  480112 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 06:40:03.551699  480112 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1205 06:40:03.551706  480112 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1205 06:40:03.551711  480112 command_runner.go:130] > #   the values from the default runtime on load time.
	I1205 06:40:03.551717  480112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 06:40:03.551723  480112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 06:40:03.551730  480112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 06:40:03.551736  480112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 06:40:03.551740  480112 command_runner.go:130] > #   The currently recognized values are:
	I1205 06:40:03.551747  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 06:40:03.551754  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 06:40:03.551761  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 06:40:03.551767  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 06:40:03.551774  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 06:40:03.551781  480112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 06:40:03.551788  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 06:40:03.551794  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 06:40:03.551800  480112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 06:40:03.551807  480112 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1205 06:40:03.551813  480112 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1205 06:40:03.551819  480112 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1205 06:40:03.551828  480112 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1205 06:40:03.551834  480112 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1205 06:40:03.551840  480112 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1205 06:40:03.551848  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1205 06:40:03.551854  480112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 06:40:03.551858  480112 command_runner.go:130] > #   deprecated option "conmon".
	I1205 06:40:03.551865  480112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 06:40:03.551870  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 06:40:03.551877  480112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 06:40:03.551882  480112 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 06:40:03.551888  480112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 06:40:03.551893  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 06:40:03.551900  480112 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1205 06:40:03.551907  480112 command_runner.go:130] > #   conmon-rs by using:
	I1205 06:40:03.551915  480112 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1205 06:40:03.551924  480112 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1205 06:40:03.551931  480112 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1205 06:40:03.551937  480112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 06:40:03.551943  480112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 06:40:03.551950  480112 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1205 06:40:03.551958  480112 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1205 06:40:03.551964  480112 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1205 06:40:03.551971  480112 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1205 06:40:03.551979  480112 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1205 06:40:03.551983  480112 command_runner.go:130] > #   when a machine crash happens.
	I1205 06:40:03.551990  480112 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1205 06:40:03.551997  480112 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1205 06:40:03.552005  480112 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1205 06:40:03.552009  480112 command_runner.go:130] > #   seccomp profile for the runtime.
	I1205 06:40:03.552015  480112 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1205 06:40:03.552022  480112 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1205 06:40:03.552025  480112 command_runner.go:130] > #
	I1205 06:40:03.552029  480112 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 06:40:03.552032  480112 command_runner.go:130] > #
	I1205 06:40:03.552038  480112 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 06:40:03.552044  480112 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 06:40:03.552046  480112 command_runner.go:130] > #
	I1205 06:40:03.552053  480112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 06:40:03.552058  480112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 06:40:03.552061  480112 command_runner.go:130] > #
	I1205 06:40:03.552067  480112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 06:40:03.552070  480112 command_runner.go:130] > # feature.
	I1205 06:40:03.552072  480112 command_runner.go:130] > #
	I1205 06:40:03.552078  480112 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 06:40:03.552085  480112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 06:40:03.552090  480112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 06:40:03.552104  480112 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 06:40:03.552111  480112 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 06:40:03.552114  480112 command_runner.go:130] > #
	I1205 06:40:03.552121  480112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 06:40:03.552127  480112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 06:40:03.552129  480112 command_runner.go:130] > #
	I1205 06:40:03.552135  480112 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 06:40:03.552141  480112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 06:40:03.552144  480112 command_runner.go:130] > #
	I1205 06:40:03.552150  480112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 06:40:03.552156  480112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 06:40:03.552159  480112 command_runner.go:130] > # limitation.
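
Putting the pieces together, a runtime handler that allows the notifier annotation is just another [crio.runtime.runtimes.*] table like the crun entry below. As a minimal sketch (assuming the github.com/BurntSushi/toml package is available on the module path), such an entry can be decoded in Go:

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors the keys of a [crio.runtime.runtimes.*] table
// as they appear in the dumped config.
type runtimeHandler struct {
	RuntimePath        string   `toml:"runtime_path"`
	RuntimeType        string   `toml:"runtime_type"`
	RuntimeRoot        string   `toml:"runtime_root"`
	MonitorPath        string   `toml:"monitor_path"`
	AllowedAnnotations []string `toml:"allowed_annotations"`
}

func main() {
	data := `
[crio.runtime.runtimes.crun]
runtime_path = "/usr/libexec/crio/crun"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
`
	var cfg struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}
	if _, err := toml.Decode(data, &cfg); err != nil {
		panic(err)
	}
	for name, h := range cfg.Crio.Runtime.Runtimes {
		fmt.Println(name, h.RuntimePath, h.AllowedAnnotations)
	}
}
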
	I1205 06:40:03.552163  480112 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1205 06:40:03.552167  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1205 06:40:03.552170  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552174  480112 command_runner.go:130] > runtime_root = "/run/crun"
	I1205 06:40:03.552178  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552182  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552188  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552193  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552197  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552200  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552204  480112 command_runner.go:130] > allowed_annotations = [
	I1205 06:40:03.552208  480112 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1205 06:40:03.552211  480112 command_runner.go:130] > ]
	I1205 06:40:03.552215  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552219  480112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 06:40:03.552223  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1205 06:40:03.552226  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552230  480112 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 06:40:03.552234  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552237  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552241  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552248  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552252  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552256  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552260  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552267  480112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 06:40:03.552272  480112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 06:40:03.552278  480112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 06:40:03.552286  480112 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1205 06:40:03.552300  480112 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1205 06:40:03.552310  480112 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1205 06:40:03.552319  480112 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1205 06:40:03.552324  480112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 06:40:03.552334  480112 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 06:40:03.552342  480112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 06:40:03.552349  480112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 06:40:03.552356  480112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 06:40:03.552359  480112 command_runner.go:130] > # Example:
	I1205 06:40:03.552364  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 06:40:03.552368  480112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 06:40:03.552373  480112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 06:40:03.552382  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 06:40:03.552385  480112 command_runner.go:130] > # cpuset = "0-1"
	I1205 06:40:03.552389  480112 command_runner.go:130] > # cpushares = "5"
	I1205 06:40:03.552392  480112 command_runner.go:130] > # cpuquota = "1000"
	I1205 06:40:03.552396  480112 command_runner.go:130] > # cpuperiod = "100000"
	I1205 06:40:03.552399  480112 command_runner.go:130] > # cpulimit = "35"
	I1205 06:40:03.552402  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.552406  480112 command_runner.go:130] > # The workload name is workload-type.
	I1205 06:40:03.552413  480112 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 06:40:03.552419  480112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 06:40:03.552424  480112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 06:40:03.552432  480112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 06:40:03.552438  480112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
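
A back-of-the-envelope check of the cpulimit-to-cpuquota mapping described above, using the example values (CRI-O's exact rounding may differ): cpulimit is in millicores, so cpulimit = 35 with cpuperiod = 100000 microseconds yields a quota of 35/1000 * 100000 = 3500 microseconds per period.

package main

import "fmt"

// quotaFromMillicores derives a CFS quota (in microseconds) from a millicore
// limit and a period, per the description above.
func quotaFromMillicores(millicores, periodUS int64) int64 {
	return millicores * periodUS / 1000
}

func main() {
	fmt.Println(quotaFromMillicores(35, 100000)) // 3500
}
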
	I1205 06:40:03.552445  480112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 06:40:03.552452  480112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 06:40:03.552456  480112 command_runner.go:130] > # Default value is set to true
	I1205 06:40:03.552461  480112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 06:40:03.552466  480112 command_runner.go:130] > # disable_hostport_mapping determines whether to disable
	I1205 06:40:03.552471  480112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 06:40:03.552475  480112 command_runner.go:130] > # Default value is set to 'false'
	I1205 06:40:03.552479  480112 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 06:40:03.552484  480112 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1205 06:40:03.552492  480112 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1205 06:40:03.552495  480112 command_runner.go:130] > # timezone = ""
	I1205 06:40:03.552502  480112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 06:40:03.552504  480112 command_runner.go:130] > #
	I1205 06:40:03.552510  480112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 06:40:03.552517  480112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1205 06:40:03.552520  480112 command_runner.go:130] > [crio.image]
	I1205 06:40:03.552526  480112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 06:40:03.552530  480112 command_runner.go:130] > # default_transport = "docker://"
	I1205 06:40:03.552536  480112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 06:40:03.552543  480112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552547  480112 command_runner.go:130] > # global_auth_file = ""
	I1205 06:40:03.552552  480112 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 06:40:03.552557  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552561  480112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.552568  480112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 06:40:03.552574  480112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552581  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552585  480112 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 06:40:03.552591  480112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 06:40:03.552597  480112 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 06:40:03.552603  480112 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 06:40:03.552608  480112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 06:40:03.552612  480112 command_runner.go:130] > # pause_command = "/pause"
	I1205 06:40:03.552622  480112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 06:40:03.552628  480112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 06:40:03.552641  480112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 06:40:03.552646  480112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 06:40:03.552652  480112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 06:40:03.552658  480112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 06:40:03.552661  480112 command_runner.go:130] > # pinned_images = [
	I1205 06:40:03.552664  480112 command_runner.go:130] > # ]
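A drop-in pinning the pause image by glob, following the pattern rules just described, might look like this (the file name is illustrative):

	# Hypothetical drop-in; the glob pins every registry.k8s.io/pause tag.
	sudo tee /etc/crio/crio.conf.d/15-pinned-images.conf <<-'EOF'
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause*",
	]
	EOF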
	I1205 06:40:03.552670  480112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 06:40:03.552675  480112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 06:40:03.552681  480112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 06:40:03.552687  480112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 06:40:03.552692  480112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 06:40:03.552697  480112 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1205 06:40:03.552702  480112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 06:40:03.552708  480112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 06:40:03.552716  480112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 06:40:03.552722  480112 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1205 06:40:03.552728  480112 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 06:40:03.552733  480112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
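For reference, the smallest useful containers-policy.json(5) is the permissive default that trusts every image; writing it to the path configured above is shown purely as a sketch:

	# Permissive policy per containers-policy.json(5); trusts all images.
	sudo tee /etc/crio/policy.json <<-'EOF'
	{
	  "default": [
	    { "type": "insecureAcceptAnything" }
	  ]
	}
	EOF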
	I1205 06:40:03.552738  480112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 06:40:03.552746  480112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 06:40:03.552749  480112 command_runner.go:130] > # changing them here.
	I1205 06:40:03.552755  480112 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1205 06:40:03.552758  480112 command_runner.go:130] > # insecure_registries = [
	I1205 06:40:03.552761  480112 command_runner.go:130] > # ]
	I1205 06:40:03.552767  480112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 06:40:03.552772  480112 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1205 06:40:03.552776  480112 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 06:40:03.552780  480112 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 06:40:03.553031  480112 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 06:40:03.553083  480112 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1205 06:40:03.553106  480112 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1205 06:40:03.553125  480112 command_runner.go:130] > # auto_reload_registries = false
	I1205 06:40:03.553145  480112 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1205 06:40:03.553166  480112 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1205 06:40:03.553207  480112 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1205 06:40:03.553227  480112 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1205 06:40:03.553245  480112 command_runner.go:130] > # The mode of short name resolution.
	I1205 06:40:03.553268  480112 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1205 06:40:03.553288  480112 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1205 06:40:03.553305  480112 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1205 06:40:03.553320  480112 command_runner.go:130] > # short_name_mode = "enforcing"
	I1205 06:40:03.553338  480112 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1205 06:40:03.553365  480112 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1205 06:40:03.553538  480112 command_runner.go:130] > # oci_artifact_mount_support = true
	I1205 06:40:03.553551  480112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 06:40:03.553555  480112 command_runner.go:130] > # CNI plugins.
	I1205 06:40:03.553559  480112 command_runner.go:130] > [crio.network]
	I1205 06:40:03.553564  480112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 06:40:03.553570  480112 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 06:40:03.553574  480112 command_runner.go:130] > # cni_default_network = ""
	I1205 06:40:03.553580  480112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 06:40:03.553587  480112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 06:40:03.553592  480112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 06:40:03.553597  480112 command_runner.go:130] > # plugin_dirs = [
	I1205 06:40:03.553600  480112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 06:40:03.553603  480112 command_runner.go:130] > # ]
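Because CRI-O selects the first configuration found in network_dir when cni_default_network is empty, a minimal bridge network dropped there would be picked up automatically. The name and subnet below are illustrative (this run uses kindnet instead, as noted further down):

	# Illustrative CNI conflist; CRI-O would pick it up as the first file found.
	sudo tee /etc/cni/net.d/10-demo-bridge.conflist <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "demo-net",
	  "plugins": [{
	    "type": "bridge",
	    "bridge": "cni0",
	    "isGateway": true,
	    "ipMasq": true,
	    "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	  }]
	}
	EOF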
	I1205 06:40:03.553607  480112 command_runner.go:130] > # List of included pod metrics.
	I1205 06:40:03.553616  480112 command_runner.go:130] > # included_pod_metrics = [
	I1205 06:40:03.553620  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553625  480112 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 06:40:03.553628  480112 command_runner.go:130] > [crio.metrics]
	I1205 06:40:03.553634  480112 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 06:40:03.553637  480112 command_runner.go:130] > # enable_metrics = false
	I1205 06:40:03.553641  480112 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 06:40:03.553646  480112 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 06:40:03.553655  480112 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 06:40:03.553661  480112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 06:40:03.553670  480112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 06:40:03.553675  480112 command_runner.go:130] > # metrics_collectors = [
	I1205 06:40:03.553679  480112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 06:40:03.553683  480112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 06:40:03.553687  480112 command_runner.go:130] > # 	"containers_oom_total",
	I1205 06:40:03.553691  480112 command_runner.go:130] > # 	"processes_defunct",
	I1205 06:40:03.553695  480112 command_runner.go:130] > # 	"operations_total",
	I1205 06:40:03.553699  480112 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 06:40:03.553703  480112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 06:40:03.553707  480112 command_runner.go:130] > # 	"operations_errors_total",
	I1205 06:40:03.553711  480112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 06:40:03.553715  480112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 06:40:03.553719  480112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 06:40:03.553723  480112 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 06:40:03.553727  480112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 06:40:03.553731  480112 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 06:40:03.553736  480112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 06:40:03.553740  480112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 06:40:03.553744  480112 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1205 06:40:03.553747  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553753  480112 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1205 06:40:03.553758  480112 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1205 06:40:03.553763  480112 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 06:40:03.553767  480112 command_runner.go:130] > # metrics_port = 9090
	I1205 06:40:03.553772  480112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 06:40:03.553775  480112 command_runner.go:130] > # metrics_socket = ""
	I1205 06:40:03.553780  480112 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 06:40:03.553786  480112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 06:40:03.553792  480112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 06:40:03.553798  480112 command_runner.go:130] > # certificate on any modification event.
	I1205 06:40:03.553802  480112 command_runner.go:130] > # metrics_cert = ""
	I1205 06:40:03.553807  480112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 06:40:03.553812  480112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 06:40:03.553822  480112 command_runner.go:130] > # metrics_key = ""
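If enable_metrics were turned on (it is commented out as false above), the endpoint on metrics_host:metrics_port can be scraped directly; a quick manual check might look like:

	# Assumes a drop-in set enable_metrics = true and CRI-O was restarted.
	curl -s http://127.0.0.1:9090/metrics | grep 'crio_' | head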
	I1205 06:40:03.553828  480112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 06:40:03.553831  480112 command_runner.go:130] > [crio.tracing]
	I1205 06:40:03.553836  480112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 06:40:03.553841  480112 command_runner.go:130] > # enable_tracing = false
	I1205 06:40:03.553846  480112 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 06:40:03.553850  480112 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1205 06:40:03.553857  480112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 06:40:03.553861  480112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
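The drop-in mechanism seen earlier in this log (/etc/crio/crio.conf.d) is the usual way to flip these tracing switches; a sketch that enables the exporter with the defaults shown above and always samples:

	# Hypothetical drop-in; values mirror the commented defaults above.
	sudo tee /etc/crio/crio.conf.d/20-tracing.conf <<-'EOF'
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000
	EOF
	sudo systemctl restart crio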
	I1205 06:40:03.553865  480112 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 06:40:03.553868  480112 command_runner.go:130] > [crio.nri]
	I1205 06:40:03.553872  480112 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 06:40:03.553876  480112 command_runner.go:130] > # enable_nri = true
	I1205 06:40:03.553880  480112 command_runner.go:130] > # NRI socket to listen on.
	I1205 06:40:03.553884  480112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 06:40:03.553888  480112 command_runner.go:130] > # NRI plugin directory to use.
	I1205 06:40:03.553893  480112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 06:40:03.553898  480112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 06:40:03.553902  480112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 06:40:03.553908  480112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 06:40:03.553979  480112 command_runner.go:130] > # nri_disable_connections = false
	I1205 06:40:03.553985  480112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 06:40:03.553990  480112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 06:40:03.553995  480112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 06:40:03.554000  480112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 06:40:03.554004  480112 command_runner.go:130] > # NRI default validator configuration.
	I1205 06:40:03.554011  480112 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1205 06:40:03.554017  480112 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1205 06:40:03.554021  480112 command_runner.go:130] > # can be restricted/rejected:
	I1205 06:40:03.554025  480112 command_runner.go:130] > # - OCI hook injection
	I1205 06:40:03.554030  480112 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1205 06:40:03.554035  480112 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1205 06:40:03.554039  480112 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1205 06:40:03.554047  480112 command_runner.go:130] > # - adjustment of linux namespaces
	I1205 06:40:03.554054  480112 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1205 06:40:03.554060  480112 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1205 06:40:03.554066  480112 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1205 06:40:03.554070  480112 command_runner.go:130] > #
	I1205 06:40:03.554075  480112 command_runner.go:130] > # [crio.nri.default_validator]
	I1205 06:40:03.554079  480112 command_runner.go:130] > # nri_enable_default_validator = false
	I1205 06:40:03.554084  480112 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1205 06:40:03.554090  480112 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1205 06:40:03.554095  480112 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1205 06:40:03.554101  480112 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1205 06:40:03.554106  480112 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1205 06:40:03.554110  480112 command_runner.go:130] > # nri_validator_required_plugins = [
	I1205 06:40:03.554113  480112 command_runner.go:130] > # ]
	I1205 06:40:03.554118  480112 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
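Turning the validator on uses the same table and key names as the commented defaults above; for example, rejecting OCI hook injection while leaving the other adjustments allowed (a sketch via drop-in):

	# Hypothetical drop-in mirroring the commented keys above.
	sudo tee /etc/crio/crio.conf.d/30-nri-validator.conf <<-'EOF'
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	EOF
	sudo systemctl restart crio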
	I1205 06:40:03.554124  480112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 06:40:03.554127  480112 command_runner.go:130] > [crio.stats]
	I1205 06:40:03.554133  480112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 06:40:03.554138  480112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 06:40:03.554142  480112 command_runner.go:130] > # stats_collection_period = 0
	I1205 06:40:03.554148  480112 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1205 06:40:03.554154  480112 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1205 06:40:03.554158  480112 command_runner.go:130] > # collection_period = 0
	I1205 06:40:03.556162  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527241832Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1205 06:40:03.556207  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527278608Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1205 06:40:03.556230  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527308122Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1205 06:40:03.556255  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.52733264Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1205 06:40:03.556280  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527409367Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.556295  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527814951Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1205 06:40:03.556306  480112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 06:40:03.556383  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:40:03.556397  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:40:03.556420  480112 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:40:03.556447  480112 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:40:03.556582  480112 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:40:03.556659  480112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:40:03.563611  480112 command_runner.go:130] > kubeadm
	I1205 06:40:03.563630  480112 command_runner.go:130] > kubectl
	I1205 06:40:03.563636  480112 command_runner.go:130] > kubelet
	I1205 06:40:03.564590  480112 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:40:03.564681  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:40:03.572146  480112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:40:03.584914  480112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:40:03.598402  480112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
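The rendered file can be sanity-checked on the node before kubeadm consumes it; recent kubeadm releases ship a validator for exactly this kind of multi-document config (paths here match the binary and file locations from the surrounding log):

	# Validate the freshly written config against the kubeadm API types.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new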
	I1205 06:40:03.610806  480112 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:40:03.614247  480112 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:40:03.614336  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.749526  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:04.526831  480112 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:40:04.526920  480112 certs.go:195] generating shared ca certs ...
	I1205 06:40:04.526970  480112 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:04.527146  480112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:40:04.527262  480112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:40:04.527298  480112 certs.go:257] generating profile certs ...
	I1205 06:40:04.527454  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:40:04.527572  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:40:04.527654  480112 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:40:04.527683  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:40:04.527717  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:40:04.527750  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:40:04.527779  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:40:04.527812  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:40:04.527845  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:40:04.527901  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:40:04.527942  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:40:04.528018  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:40:04.528084  480112 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:40:04.528110  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:40:04.528175  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:40:04.528223  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:40:04.528266  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:40:04.528351  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:04.528416  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.528448  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem -> /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.528484  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.529122  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:40:04.549434  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:40:04.568942  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:40:04.588032  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:40:04.616779  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:40:04.636137  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:40:04.655504  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:40:04.673755  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:40:04.692822  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:40:04.711199  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:40:04.730794  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:40:04.748559  480112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:40:04.762229  480112 ssh_runner.go:195] Run: openssl version
	I1205 06:40:04.768327  480112 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:40:04.768697  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.776287  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:40:04.784133  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788189  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788221  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788277  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.829541  480112 command_runner.go:130] > b5213941
	I1205 06:40:04.829985  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:40:04.837884  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.845797  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:40:04.853974  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.857841  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858230  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858295  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.900152  480112 command_runner.go:130] > 51391683
	I1205 06:40:04.900696  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:40:04.908660  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.916381  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:40:04.924345  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928449  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928489  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928538  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.969475  480112 command_runner.go:130] > 3ec20f2e
	I1205 06:40:04.969979  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
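All three checks above follow the same OpenSSL convention: openssl x509 -hash prints the subject-name hash that the library uses to look up CAs in /etc/ssl/certs, so each certificate is linked as <hash>.0. Reproducing one of them by hand:

	# Reproduce the hash-and-symlink step for one CA file.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	sudo test -L "/etc/ssl/certs/${hash}.0" && echo linked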
	I1205 06:40:04.977627  480112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981676  480112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981703  480112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:40:04.981710  480112 command_runner.go:130] > Device: 259,1	Inode: 1046940     Links: 1
	I1205 06:40:04.981717  480112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:04.981724  480112 command_runner.go:130] > Access: 2025-12-05 06:35:56.052204819 +0000
	I1205 06:40:04.981729  480112 command_runner.go:130] > Modify: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981735  480112 command_runner.go:130] > Change: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981741  480112 command_runner.go:130] >  Birth: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981812  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:40:05.025511  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.026281  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:40:05.067472  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.067923  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:40:05.109199  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.110439  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:40:05.151291  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.151789  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:40:05.192630  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.193112  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:40:05.234917  480112 command_runner.go:130] > Certificate will not expire
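-checkend takes a number of seconds, so 86400 asks whether the certificate is still valid 24 hours from now; the answer is carried in the exit status, which is why each probe above simply reports "Certificate will not expire". As a standalone check:

	# Exit 0 while the cert remains valid for the next 24h (86400s).
	if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "more than 24h of validity left"
	else
	  echo "expires within 24h"
	fi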
	I1205 06:40:05.235493  480112 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:40:05.235576  480112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:40:05.235658  480112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:40:05.274773  480112 cri.go:89] found id: ""
	I1205 06:40:05.274854  480112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:40:05.284543  480112 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:40:05.284569  480112 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:40:05.284576  480112 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:40:05.284587  480112 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:40:05.284593  480112 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:40:05.284641  480112 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:40:05.293745  480112 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:40:05.294169  480112 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.294277  480112 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-441321/kubeconfig needs updating (will repair): [kubeconfig missing "functional-787602" cluster setting kubeconfig missing "functional-787602" context setting]
	I1205 06:40:05.294658  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.295082  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.295239  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.295723  480112 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:40:05.295760  480112 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:40:05.295766  480112 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:40:05.295771  480112 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:40:05.295779  480112 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:40:05.296148  480112 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:40:05.296228  480112 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:40:05.305058  480112 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:40:05.305103  480112 kubeadm.go:602] duration metric: took 20.504477ms to restartPrimaryControlPlane
	I1205 06:40:05.305113  480112 kubeadm.go:403] duration metric: took 69.632192ms to StartCluster
	I1205 06:40:05.305127  480112 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305185  480112 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.305773  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305969  480112 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:40:05.306285  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:05.306340  480112 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:40:05.306433  480112 addons.go:70] Setting storage-provisioner=true in profile "functional-787602"
	I1205 06:40:05.306448  480112 addons.go:239] Setting addon storage-provisioner=true in "functional-787602"
	I1205 06:40:05.306452  480112 addons.go:70] Setting default-storageclass=true in profile "functional-787602"
	I1205 06:40:05.306473  480112 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-787602"
	I1205 06:40:05.306480  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.306771  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.306997  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.310651  480112 out.go:179] * Verifying Kubernetes components...
	I1205 06:40:05.313979  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:05.339795  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.340007  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.340282  480112 addons.go:239] Setting addon default-storageclass=true in "functional-787602"
	I1205 06:40:05.340312  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.340728  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.361959  480112 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:40:05.364893  480112 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.364921  480112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:40:05.364987  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.384451  480112 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:05.384479  480112 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:40:05.384563  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.411372  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.432092  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.510112  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:05.550609  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.557147  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.275527  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275618  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275677  480112 retry.go:31] will retry after 247.926554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275753  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275786  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275814  480112 retry.go:31] will retry after 139.276641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275869  480112 node_ready.go:35] waiting up to 6m0s for node "functional-787602" to be "Ready" ...
	I1205 06:40:06.275986  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.276069  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.276382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.415646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.474935  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.474981  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.475001  480112 retry.go:31] will retry after 366.421161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.524197  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.584795  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.584843  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.584873  480112 retry.go:31] will retry after 312.76439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
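
The --validate=false hint in these errors only disables kubectl's client-side schema validation, which is what needs the /openapi/v2 download; it would not rescue these applies, because the same unreachable apiserver still has to accept the objects. A throwaway probe of the endpoint kubectl is failing to fetch (a diagnostic sketch, not part of the test):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Probe the document that kubectl's client-side validation downloads.
    	// InsecureSkipVerify is test-only; a live check would use the cluster CA.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
    	if err != nil {
    		// While the apiserver is down this reproduces the failure in the log:
    		// dial tcp [::1]:8441: connect: connection refused
    		fmt.Println("openapi unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi reachable:", resp.Status)
    }
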
	I1205 06:40:06.776120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.776655  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.841962  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.898526  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.904086  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.904127  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.904149  480112 retry.go:31] will retry after 740.273906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.959857  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.963461  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.963497  480112 retry.go:31] will retry after 759.965783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.276975  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.277072  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.277469  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.645230  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:07.705790  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.705833  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.705854  480112 retry.go:31] will retry after 642.466008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.724048  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:07.776045  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.776157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.776481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.791584  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.795338  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.795382  480112 retry.go:31] will retry after 614.279076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:08.276605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
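
Each node_ready.go:55 warning above is one failed iteration of the 6m0s wait declared at node_ready.go:35 earlier. A hedged client-go sketch of the same check, assuming a kubeconfig path (only the node name, endpoint, and ~500ms cadence come from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path; the trace above hits the same endpoint directly.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute) // the budget from node_ready.go:35
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-787602", metav1.GetOptions{})
    		if err != nil {
    			fmt.Println("will retry:", err) // e.g. connection refused while the apiserver is down
    		} else {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls on a ~500ms cadence
    	}
    	fmt.Println("timed out waiting for node functional-787602 to become Ready")
    }
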
	I1205 06:40:08.348828  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:08.405271  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.408500  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.408576  480112 retry.go:31] will retry after 1.343995427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.410740  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:08.473489  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.473541  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.473564  480112 retry.go:31] will retry after 1.078913702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.777094  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.777222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.777651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.276356  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.276453  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.276780  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
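
The paired round_trippers lines (a "Request" with verb and url, then a "Response" with status and milliseconds) come from client-go's request tracing. A small stand-in that emits the same shape of trace outside the test harness; the header values are copied verbatim from the log, and the skip-verify TLS config is a test-only assumption:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // loggingRT mimics the round_trippers entries: one line per request with the
    // verb and URL, one per response with the status and elapsed milliseconds.
    type loggingRT struct{ next http.RoundTripper }

    func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("\"Request\" verb=%q url=%q\n", req.Method, req.URL.String())
    	start := time.Now()
    	resp, err := l.next.RoundTrip(req)
    	status := "" // stays empty when the dial fails, as in the trace above
    	if resp != nil {
    		status = resp.Status
    	}
    	fmt.Printf("\"Response\" status=%q milliseconds=%d\n", status, time.Since(start).Milliseconds())
    	return resp, err
    }

    func main() {
    	client := &http.Client{Transport: loggingRT{next: &http.Transport{
    		// Test-only: the minikube apiserver cert is not in the host trust store.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}}
    	req, err := http.NewRequest("GET", "https://192.168.49.2:8441/api/v1/nodes/functional-787602", nil)
    	if err != nil {
    		panic(err)
    	}
    	// Header values copied from the log lines above.
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	req.Header.Set("User-Agent", "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")
    	if _, err := client.Do(req); err != nil {
    		fmt.Println(err) // currently: dial tcp 192.168.49.2:8441: connect: connection refused
    	}
    }
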
	I1205 06:40:09.553646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:09.614016  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.614089  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.614116  480112 retry.go:31] will retry after 2.379780781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.753405  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:09.777031  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.777132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.777482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.813171  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.813239  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.813272  480112 retry.go:31] will retry after 1.978465808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:10.276816  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.277257  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:10.277348  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:10.776020  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.776102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.776363  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.276081  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.276155  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.791876  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:11.850961  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:11.851011  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.851047  480112 retry.go:31] will retry after 1.715194365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.994161  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:12.058032  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:12.058079  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.058098  480112 retry.go:31] will retry after 2.989540966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.276377  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.276451  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.276701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:12.776111  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:12.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:13.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.276532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:13.567026  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:13.620219  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:13.623514  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.623554  480112 retry.go:31] will retry after 5.458226005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.776806  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.776876  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.777207  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.277043  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.277126  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.277411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:14.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
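
Two distinct endpoints are being refused here: kubectl, executed on the node, dials localhost:8441, while the test binary on the host dials 192.168.49.2:8441. Both refusing connections points at the apiserver process being down, not at routing or firewalling. A throwaway probe that separates "refused" (nothing listening) from "timed out" (illustrative only):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Both endpoints taken from the log lines above.
    	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%s: %v\n", addr, err) // "connect: connection refused" = nothing listening
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s: listening\n", addr)
    	}
    }
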
	I1205 06:40:15.048089  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:15.111053  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:15.111091  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.111112  480112 retry.go:31] will retry after 5.631155228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.276375  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.276443  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:15.776648  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.777039  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.276857  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.276930  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.776968  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.777300  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:16.777347  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:17.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.276495  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:17.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.276064  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.276137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.276439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:19.082075  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:19.143244  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:19.143293  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.143314  480112 retry.go:31] will retry after 4.646546475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.276638  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.276712  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.277087  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:19.277141  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:19.776926  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.777007  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.777341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.276113  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.743196  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:20.776726  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.776805  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.777070  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.801108  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:20.801144  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:20.801162  480112 retry.go:31] will retry after 9.136671028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:21.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.276973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.277268  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:21.277311  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:21.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.276249  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.776313  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.776619  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.276136  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.776172  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.776265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:23.776664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:23.790980  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:23.852305  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:23.852351  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:23.852373  480112 retry.go:31] will retry after 4.852638111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:24.276878  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.276951  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.277225  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:24.776145  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.276631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.776628  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.776885  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:25.776924  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:26.276685  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.276766  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.277101  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:26.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.777045  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.777350  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.277008  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.277082  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.277349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.776144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:28.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.276512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:28.276571  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:28.705256  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:28.766465  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:28.766519  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.766541  480112 retry.go:31] will retry after 15.718503653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
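
Each ssh_runner.go:195 "Run:" line executes its command inside the node over SSH and surfaces the remote exit code as the "Process exited with status 1" seen in every retry message. A rough analog using golang.org/x/crypto/ssh; the user, port, and key path below are assumptions, since none of them appear in this log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Assumed connection details; the real ssh_runner uses minikube's
    	// generated key and the container's mapped SSH port.
    	key, err := os.ReadFile("/home/user/.minikube/machines/functional-787602/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "192.168.49.2:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	// Command copied from the Run: lines above.
    	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml")
    	if exitErr, ok := err.(*ssh.ExitError); ok {
    		// Matches "Process exited with status 1" in the retry messages.
    		fmt.Printf("Process exited with status %d\n%s\n", exitErr.ExitStatus(), out)
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s\n", out)
    }
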
	I1205 06:40:28.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.777014  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.276890  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.276967  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.277333  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.776501  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.776578  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.776920  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.938493  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:30.002212  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:30.002257  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.002277  480112 retry.go:31] will retry after 5.082732051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.276542  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.276613  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.276880  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:30.276935  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 9 further identical GET polls of /api/v1/nodes/functional-787602, one every ~0.5s from 06:40:30.776 through 06:40:34.776, each refused with no response; the node_ready "connection refused" warning recurred at 06:40:32.776 ...]
	W1205 06:40:34.776788  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
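
Interleaved with the addon retries, the node_ready check above polls GET /api/v1/nodes/functional-787602 twice per second; with the apiserver down, every request dies at TCP connect, which is why each "Response" line carries an empty status and 0 milliseconds. A hedged client-go sketch of such a Ready poll; the kubeconfig path and node name are taken from this log, while nodeReady is an invented helper, not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    // Invented helper for illustration.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. "connect: connection refused" while the apiserver is down
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for { // poll until Ready, matching the ~0.5s cadence in the log
            if ok, err := nodeReady(cs, "functional-787602"); err != nil {
                fmt.Println("will retry:", err)
            } else if ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
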
	I1205 06:40:35.085301  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:35.148531  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:35.152882  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.152918  480112 retry.go:31] will retry after 11.086200752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.276603  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 18 further identical GET polls, one every ~0.5s from 06:40:35.777 through 06:40:44.276, each refused; node_ready "connection refused" warnings recurred at 06:40:37.276, 06:40:39.776 and 06:40:42.276 ...]
	I1205 06:40:44.485984  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:44.554072  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:44.557893  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.557927  480112 retry.go:31] will retry after 22.628614414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
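
Note that the workaround kubectl suggests in its own error text, --validate=false, only skips this OpenAPI download: it would get the command past validation, but the apply itself still needs a reachable apiserver, so it cannot rescue a dead control plane. For reference, the logged command with the suggested flag appended would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml
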
	I1205 06:40:44.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.776735  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:44.776781  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 2 further identical GET polls at 06:40:45.276 and 06:40:45.776, both refused ...]
	I1205 06:40:46.239320  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:46.276723  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.277080  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.296820  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:46.296888  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.296909  480112 retry.go:31] will retry after 16.475007469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.776108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.776261  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 31 further identical GET polls, one every ~0.5s from 06:40:47.276 through 06:41:02.276, each refused; node_ready "connection refused" warnings recurred at 06:40:47.276, 06:40:49.276, 06:40:51.776, 06:40:54.276, 06:40:56.776, 06:40:58.776 and 06:41:00.776 ...]
	I1205 06:41:02.772181  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:02.776748  480112 type.go:168] "Request Body" body=""
	I1205 06:41:02.776818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:02.777092  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:02.777132  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:02.828748  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:02.831873  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:02.831907  480112 retry.go:31] will retry after 23.767145255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:03.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:41:03.276184  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:03.276443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 7 further identical GET polls, one every ~0.5s from 06:41:03.776 through 06:41:06.776, each refused; the node_ready "connection refused" warning recurred at 06:41:05.277 ...]
	I1205 06:41:07.187370  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:07.246801  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:07.246844  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:07.246863  480112 retry.go:31] will retry after 35.018877023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
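
By this point both vantage points are refusing connections: kubectl on the node fails on [::1]:8441 and the node_ready poll fails on 192.168.49.2:8441, which points at the kube-apiserver itself being down rather than at host-to-container networking. A small Go probe that separates "nothing listening" (connection refused) from slower failures such as timeouts; the addresses are copied from this log and probe is an invented helper:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe reports whether anything is accepting TCP connections on addr.
    // A fast "connection refused" means the host is reachable but the port
    // is unserved; a timeout would instead suggest routing or firewalling.
    func probe(addr string) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("%s: %v\n", addr, err)
            return
        }
        conn.Close()
        fmt.Printf("%s: listening\n", addr)
    }

    func main() {
        probe("192.168.49.2:8441") // endpoint polled by the node_ready check
        probe("127.0.0.1:8441")    // loopback endpoint kubectl uses on the node
    }
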
	I1205 06:41:07.277002  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.277431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:07.277488  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 30 further identical GET polls, one every ~0.5s from 06:41:07.777 through 06:41:22.276, each refused; node_ready "connection refused" warnings recurred at 06:41:09.776, 06:41:12.276, 06:41:14.276, 06:41:16.776, 06:41:18.776 and 06:41:20.777 ...]
	I1205 06:41:22.776229  480112 type.go:168] "Request Body" body=""
	I1205 06:41:22.776301  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:22.776581  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:23.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:41:23.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:23.276514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:23.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:23.776148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:23.776224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:23.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:24.276138  480112 type.go:168] "Request Body" body=""
	I1205 06:41:24.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:24.276467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:24.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:41:24.776202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:24.776570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:25.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:25.276273  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:25.276607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:25.276662  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:25.776023  480112 type.go:168] "Request Body" body=""
	I1205 06:41:25.776090  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:25.776414  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:26.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:26.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:26.276503  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:26.599995  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:26.657664  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660860  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660976  480112 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
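	(The failure above is the same apiserver outage seen in the poll loop: kubectl apply fetches the OpenAPI schema from localhost:8441 to validate the manifest, the dial is refused, and addons.go marks the apply as "will retry". A minimal sketch of that retry-on-exit-failure behaviour, assuming the exact command and paths from the log; the attempt count and interval are illustrative, not minikube's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyManifest mirrors the logged behaviour: run the same kubectl apply
	// that minikube runs over ssh_runner, and retry if it exits non-zero
	// (here: validation fails because the apiserver is unreachable).
	func applyManifest(path string) error {
		var lastErr error
		for attempt := 1; attempt <= 3; attempt++ {
			cmd := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", path)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
			time.Sleep(15 * time.Second)
		}
		return lastErr
	}

	func main() {
		if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Println("giving up:", err)
		}
	}

	The error text itself suggests --validate=false as a workaround, but that would only skip schema validation; the apply would still fail against a dead apiserver. The storageclass apply at 06:41:42 below fails the same way, after which the addons phase ends with enabled=[].)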
	I1205 06:41:26.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:26.776182  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:26.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:27.276090  480112 type.go:168] "Request Body" body=""
	I1205 06:41:27.276161  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:27.276457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:27.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:41:27.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:27.776545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:27.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:28.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:41:28.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:28.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:28.776230  480112 type.go:168] "Request Body" body=""
	I1205 06:41:28.776304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:28.776618  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:29.276325  480112 type.go:168] "Request Body" body=""
	I1205 06:41:29.276412  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:29.276735  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:29.776649  480112 type.go:168] "Request Body" body=""
	I1205 06:41:29.776745  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:29.777083  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:29.777135  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:30.276977  480112 type.go:168] "Request Body" body=""
	I1205 06:41:30.277054  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:30.277385  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:30.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:30.776171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:30.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:31.276794  480112 type.go:168] "Request Body" body=""
	I1205 06:41:31.276886  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:31.277179  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:31.776939  480112 type.go:168] "Request Body" body=""
	I1205 06:41:31.777016  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:31.777293  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:31.777332  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:32.276045  480112 type.go:168] "Request Body" body=""
	I1205 06:41:32.276119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:32.276435  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:32.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:41:32.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:32.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:33.276086  480112 type.go:168] "Request Body" body=""
	I1205 06:41:33.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:33.276516  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:33.776169  480112 type.go:168] "Request Body" body=""
	I1205 06:41:33.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:33.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:34.276285  480112 type.go:168] "Request Body" body=""
	I1205 06:41:34.276364  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:34.276702  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:34.276756  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:34.776367  480112 type.go:168] "Request Body" body=""
	I1205 06:41:34.776450  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:34.776713  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:35.276380  480112 type.go:168] "Request Body" body=""
	I1205 06:41:35.276460  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:35.276788  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:35.776775  480112 type.go:168] "Request Body" body=""
	I1205 06:41:35.776849  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:35.777195  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:36.276774  480112 type.go:168] "Request Body" body=""
	I1205 06:41:36.276844  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:36.277103  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:36.277142  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:36.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:41:36.777059  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:36.777387  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:37.276088  480112 type.go:168] "Request Body" body=""
	I1205 06:41:37.276166  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:37.276497  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:37.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:41:37.776165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:37.776496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:38.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:41:38.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:38.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:38.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:41:38.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:38.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:38.776611  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:39.276263  480112 type.go:168] "Request Body" body=""
	I1205 06:41:39.276331  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:39.276598  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:39.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:41:39.776206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:39.776515  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:40.276217  480112 type.go:168] "Request Body" body=""
	I1205 06:41:40.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:40.276599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:40.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:41:40.776292  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:40.776600  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:40.776666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:41.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:41:41.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:41.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:41.776294  480112 type.go:168] "Request Body" body=""
	I1205 06:41:41.776370  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:41.776711  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.266330  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:42.277622  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.277694  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.277960  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.360709  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361696  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361795  480112 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:41:42.365007  480112 out.go:179] * Enabled addons: 
	I1205 06:41:42.368666  480112 addons.go:530] duration metric: took 1m37.062317768s for enable addons: enabled=[]
	I1205 06:41:42.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:43.276187  480112 type.go:168] "Request Body" body=""
	I1205 06:41:43.276263  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:43.276622  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:43.276733  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:43.776170  480112 type.go:168] "Request Body" body=""
	I1205 06:41:43.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:43.776490  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:44.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:41:44.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:44.276497  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:44.776430  480112 type.go:168] "Request Body" body=""
	I1205 06:41:44.776531  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:44.776876  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:45.276870  480112 type.go:168] "Request Body" body=""
	I1205 06:41:45.277032  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:45.277745  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:45.277837  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:45.776642  480112 type.go:168] "Request Body" body=""
	I1205 06:41:45.776716  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:45.777050  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:46.276852  480112 type.go:168] "Request Body" body=""
	I1205 06:41:46.276927  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:46.277279  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:46.777045  480112 type.go:168] "Request Body" body=""
	I1205 06:41:46.777126  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:46.777396  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:47.276079  480112 type.go:168] "Request Body" body=""
	I1205 06:41:47.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:47.276482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:47.776088  480112 type.go:168] "Request Body" body=""
	I1205 06:41:47.776166  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:47.776536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:47.776604  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:48.276249  480112 type.go:168] "Request Body" body=""
	I1205 06:41:48.276340  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:48.276655  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:48.776130  480112 type.go:168] "Request Body" body=""
	I1205 06:41:48.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:48.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:49.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:41:49.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:49.276543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:49.776072  480112 type.go:168] "Request Body" body=""
	I1205 06:41:49.776156  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:49.776445  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:50.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:41:50.276229  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:50.276573  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:50.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:50.776285  480112 type.go:168] "Request Body" body=""
	I1205 06:41:50.776369  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:50.776728  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:51.276429  480112 type.go:168] "Request Body" body=""
	I1205 06:41:51.276528  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:51.276792  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:51.776116  480112 type.go:168] "Request Body" body=""
	I1205 06:41:51.776189  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:51.776489  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:52.276150  480112 type.go:168] "Request Body" body=""
	I1205 06:41:52.276243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:52.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:52.776085  480112 type.go:168] "Request Body" body=""
	I1205 06:41:52.776174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:52.776508  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:52.776560  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:53.276221  480112 type.go:168] "Request Body" body=""
	I1205 06:41:53.276307  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:53.276685  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:53.776167  480112 type.go:168] "Request Body" body=""
	I1205 06:41:53.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:53.776608  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:54.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:41:54.276364  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:54.276706  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:54.776729  480112 type.go:168] "Request Body" body=""
	I1205 06:41:54.776832  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:54.777167  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:54.777215  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:55.276948  480112 type.go:168] "Request Body" body=""
	I1205 06:41:55.277018  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:55.277349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:55.776048  480112 type.go:168] "Request Body" body=""
	I1205 06:41:55.776114  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:55.776379  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:56.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:41:56.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:56.276551  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:56.776084  480112 type.go:168] "Request Body" body=""
	I1205 06:41:56.776165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:56.776515  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:57.276044  480112 type.go:168] "Request Body" body=""
	I1205 06:41:57.276119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:57.276370  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:57.276409  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:57.776074  480112 type.go:168] "Request Body" body=""
	I1205 06:41:57.776175  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:57.776534  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:58.276106  480112 type.go:168] "Request Body" body=""
	I1205 06:41:58.276179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:58.276474  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:58.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:41:58.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:58.776435  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:59.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:41:59.276177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:59.276461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:59.276501  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:59.776413  480112 type.go:168] "Request Body" body=""
	I1205 06:41:59.776495  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:59.776828  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:00.276523  480112 type.go:168] "Request Body" body=""
	I1205 06:42:00.276611  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:00.276928  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:00.776709  480112 type.go:168] "Request Body" body=""
	I1205 06:42:00.776788  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:00.777104  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:01.276864  480112 type.go:168] "Request Body" body=""
	I1205 06:42:01.276950  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:01.277320  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:01.277377  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:01.776897  480112 type.go:168] "Request Body" body=""
	I1205 06:42:01.776970  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:01.777279  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:02.276047  480112 type.go:168] "Request Body" body=""
	I1205 06:42:02.276127  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:02.276460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:02.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:02.776135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:02.776476  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:03.277070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:03.277155  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:03.277407  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:03.277449  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:03.776150  480112 type.go:168] "Request Body" body=""
	I1205 06:42:03.776224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:03.776586  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:04.276303  480112 type.go:168] "Request Body" body=""
	I1205 06:42:04.276387  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:04.276681  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:04.776711  480112 type.go:168] "Request Body" body=""
	I1205 06:42:04.776794  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:04.782759  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1205 06:42:05.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:42:05.276237  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:05.276619  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:05.776366  480112 type.go:168] "Request Body" body=""
	I1205 06:42:05.776442  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:05.776784  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:05.776836  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:06.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:42:06.276234  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:06.276512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:06.776160  480112 type.go:168] "Request Body" body=""
	I1205 06:42:06.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:06.776573  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:07.276332  480112 type.go:168] "Request Body" body=""
	I1205 06:42:07.276414  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:07.276772  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:07.776264  480112 type.go:168] "Request Body" body=""
	I1205 06:42:07.776337  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:07.776591  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:08.276154  480112 type.go:168] "Request Body" body=""
	I1205 06:42:08.276230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:08.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:08.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:08.776325  480112 type.go:168] "Request Body" body=""
	I1205 06:42:08.776414  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:08.776787  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:09.276079  480112 type.go:168] "Request Body" body=""
	I1205 06:42:09.276153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:09.276425  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:09.776266  480112 type.go:168] "Request Body" body=""
	I1205 06:42:09.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:09.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:10.276403  480112 type.go:168] "Request Body" body=""
	I1205 06:42:10.276479  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:10.276767  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:10.276814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:10.776451  480112 type.go:168] "Request Body" body=""
	I1205 06:42:10.776520  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:10.776795  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:11.276636  480112 type.go:168] "Request Body" body=""
	I1205 06:42:11.276714  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:11.277054  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:11.776915  480112 type.go:168] "Request Body" body=""
	I1205 06:42:11.776994  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:11.777329  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:12.276040  480112 type.go:168] "Request Body" body=""
	I1205 06:42:12.276119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:12.276407  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:12.776109  480112 type.go:168] "Request Body" body=""
	I1205 06:42:12.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:12.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:12.776597  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:13.277062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:13.277174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:13.277498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:13.776187  480112 type.go:168] "Request Body" body=""
	I1205 06:42:13.776262  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:13.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:14.276253  480112 type.go:168] "Request Body" body=""
	I1205 06:42:14.276331  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:14.276688  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:14.776496  480112 type.go:168] "Request Body" body=""
	I1205 06:42:14.776570  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:14.776890  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:14.776948  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:15.276665  480112 type.go:168] "Request Body" body=""
	I1205 06:42:15.276733  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:15.277042  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:15.776898  480112 type.go:168] "Request Body" body=""
	I1205 06:42:15.776973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:15.777305  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:16.276026  480112 type.go:168] "Request Body" body=""
	I1205 06:42:16.276107  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:16.276436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:16.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:42:16.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:16.776492  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:17.276116  480112 type.go:168] "Request Body" body=""
	I1205 06:42:17.276185  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:17.276519  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:17.276576  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:17.776272  480112 type.go:168] "Request Body" body=""
	I1205 06:42:17.776357  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:17.776675  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:18.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:18.276109  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:18.276436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:18.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:42:18.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:18.776525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:19.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:42:19.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:19.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:19.276619  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:19.776407  480112 type.go:168] "Request Body" body=""
	I1205 06:42:19.776483  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:19.776740  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:20.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:42:20.276234  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:20.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:20.776163  480112 type.go:168] "Request Body" body=""
	I1205 06:42:20.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:20.776556  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:21.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:42:21.276156  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:21.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:21.776197  480112 type.go:168] "Request Body" body=""
	I1205 06:42:21.776270  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:21.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:21.776630  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:22.276147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:22.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:22.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:22.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:42:22.776267  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:22.776568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:23.276270  480112 type.go:168] "Request Body" body=""
	I1205 06:42:23.276346  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:23.276675  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:23.776093  480112 type.go:168] "Request Body" body=""
	I1205 06:42:23.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:23.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:24.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:42:24.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:24.276435  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:24.276482  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:24.776113  480112 type.go:168] "Request Body" body=""
	I1205 06:42:24.776187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:24.776508  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:25.276141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:25.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:25.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:25.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:25.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:25.776592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:26.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:26.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:26.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:26.276597  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:26.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:42:26.776223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:26.776559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:27.276255  480112 type.go:168] "Request Body" body=""
	I1205 06:42:27.276327  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:27.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:27.776272  480112 type.go:168] "Request Body" body=""
	I1205 06:42:27.776352  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:27.776694  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:28.276141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:28.276215  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:28.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:28.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:28.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:28.776441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:28.776496  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:29.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:29.276214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:29.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:29.776193  480112 type.go:168] "Request Body" body=""
	I1205 06:42:29.776294  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:29.776633  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.276354  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.276958  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.776754  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.776886  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.777216  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:30.777271  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:31.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.276997  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.277353  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:31.776905  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.776973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.777239  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.276031  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.276129  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.776566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:33.276005  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.276073  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.276326  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:33.276364  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:33.776056  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.776130  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.776489  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.276252  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.276601  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.776105  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.776170  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.776439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:35.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.276502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:35.276548  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:35.776412  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.776485  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.776805  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.276468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:37.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:37.276578  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:37.776073  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.776140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.776354  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.276191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.276447  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.776154  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.776231  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.776555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:39.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:40.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.776168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.776483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.276160  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.776311  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.776412  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:41.776800  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:42.276460  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.276533  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.276835  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:42.776147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.276274  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.776371  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:44.276399  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.276475  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.276774  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:44.276818  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:44.776823  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.776896  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.777260  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.277015  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.277165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.277467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.276290  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.276372  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.276755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.776351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.776696  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:46.776865  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:47.276162  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.276562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:47.776400  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.776503  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.777026  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.276644  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.276723  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.276978  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.776820  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.776899  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.777234  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:48.777287  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:49.277045  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.277135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.277475  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:49.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.776484  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.276116  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.276446  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.776127  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.776478  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:51.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:51.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.776200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.276504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.776090  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.776470  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.276544  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:53.776655  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:54.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:54.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.776393  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.776463  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.776718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:55.776760  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:56.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:56.776279  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.776355  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.276419  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.776121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:58.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:58.276716  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:58.776027  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.776099  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.776349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.276501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.776817  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.776902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.777233  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:00.277352  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.277456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.277768  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:00.277814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:00.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.776654  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.776901  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.776971  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.777244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.277063  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.277162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.277496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:02.776546  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:03.276099  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.276179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the request/response pair above repeats at ~500 ms intervals through 06:44:05.276, every attempt returning an empty response with the dial refused; node_ready logs the warning 'error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused' roughly every 2–2.5 s, the last at 06:44:05.276923 ...]
	I1205 06:44:05.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:44:05.776194  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:05.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:06.276351  480112 type.go:168] "Request Body" body=""
	I1205 06:44:06.276427  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:06.276786  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:06.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:44:06.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:06.776566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:07.276285  480112 type.go:168] "Request Body" body=""
	I1205 06:44:07.276368  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:07.276703  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:07.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:44:07.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:07.776460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:07.776509  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:08.276197  480112 type.go:168] "Request Body" body=""
	I1205 06:44:08.276276  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:08.276613  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:08.776318  480112 type.go:168] "Request Body" body=""
	I1205 06:44:08.776428  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:08.776751  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:09.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:44:09.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:09.276440  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:09.776197  480112 type.go:168] "Request Body" body=""
	I1205 06:44:09.776287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:09.776623  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:09.776680  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:10.276196  480112 type.go:168] "Request Body" body=""
	I1205 06:44:10.276274  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:10.276577  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:10.776259  480112 type.go:168] "Request Body" body=""
	I1205 06:44:10.776330  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:10.776668  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:11.276138  480112 type.go:168] "Request Body" body=""
	I1205 06:44:11.276219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:11.276564  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:11.776276  480112 type.go:168] "Request Body" body=""
	I1205 06:44:11.776353  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:11.776679  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:11.776729  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:12.276203  480112 type.go:168] "Request Body" body=""
	I1205 06:44:12.276275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:12.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:12.776112  480112 type.go:168] "Request Body" body=""
	I1205 06:44:12.776183  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:12.776496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:13.276219  480112 type.go:168] "Request Body" body=""
	I1205 06:44:13.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:13.276630  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:13.776971  480112 type.go:168] "Request Body" body=""
	I1205 06:44:13.777044  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:13.777316  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:13.777359  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:14.276035  480112 type.go:168] "Request Body" body=""
	I1205 06:44:14.276110  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:14.276443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:14.776134  480112 type.go:168] "Request Body" body=""
	I1205 06:44:14.776211  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:14.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:15.276120  480112 type.go:168] "Request Body" body=""
	I1205 06:44:15.276190  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:15.276456  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:15.776155  480112 type.go:168] "Request Body" body=""
	I1205 06:44:15.776242  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:15.776630  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:16.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:44:16.276312  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:16.276651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:16.276712  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:16.776087  480112 type.go:168] "Request Body" body=""
	I1205 06:44:16.776158  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:16.776479  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:17.276169  480112 type.go:168] "Request Body" body=""
	I1205 06:44:17.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:17.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:17.776291  480112 type.go:168] "Request Body" body=""
	I1205 06:44:17.776366  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:17.776701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:18.276007  480112 type.go:168] "Request Body" body=""
	I1205 06:44:18.276073  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:18.276319  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:18.776002  480112 type.go:168] "Request Body" body=""
	I1205 06:44:18.776084  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:18.776459  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:18.776517  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:19.276181  480112 type.go:168] "Request Body" body=""
	I1205 06:44:19.276257  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:19.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:19.776049  480112 type.go:168] "Request Body" body=""
	I1205 06:44:19.776119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:19.776371  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:20.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:44:20.276146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:20.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:20.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:44:20.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:20.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:20.776581  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:21.276106  480112 type.go:168] "Request Body" body=""
	I1205 06:44:21.276174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:21.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:21.776202  480112 type.go:168] "Request Body" body=""
	I1205 06:44:21.776283  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:21.776659  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:22.276365  480112 type.go:168] "Request Body" body=""
	I1205 06:44:22.276438  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:22.276776  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:22.776225  480112 type.go:168] "Request Body" body=""
	I1205 06:44:22.776394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:22.776811  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:22.776918  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:23.276742  480112 type.go:168] "Request Body" body=""
	I1205 06:44:23.276818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:23.277175  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:23.777069  480112 type.go:168] "Request Body" body=""
	I1205 06:44:23.777161  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:23.777559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:24.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:24.276175  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:24.276441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:24.776173  480112 type.go:168] "Request Body" body=""
	I1205 06:44:24.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:24.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:25.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:25.276345  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:25.276694  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:25.276749  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:25.776415  480112 type.go:168] "Request Body" body=""
	I1205 06:44:25.776487  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:25.776789  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:26.276167  480112 type.go:168] "Request Body" body=""
	I1205 06:44:26.276248  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:26.276568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:26.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:44:26.776206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:26.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:27.276125  480112 type.go:168] "Request Body" body=""
	I1205 06:44:27.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:27.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:27.776128  480112 type.go:168] "Request Body" body=""
	I1205 06:44:27.776214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:27.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:27.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:28.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:44:28.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:28.276543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:28.776178  480112 type.go:168] "Request Body" body=""
	I1205 06:44:28.776254  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:28.776579  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:29.276154  480112 type.go:168] "Request Body" body=""
	I1205 06:44:29.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:29.276542  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:29.776485  480112 type.go:168] "Request Body" body=""
	I1205 06:44:29.776594  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:29.776923  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:29.776981  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:30.276085  480112 type.go:168] "Request Body" body=""
	I1205 06:44:30.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:30.276456  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:30.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:44:30.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:30.776542  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:31.276152  480112 type.go:168] "Request Body" body=""
	I1205 06:44:31.276240  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:31.276609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:31.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:44:31.776189  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:31.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:32.276132  480112 type.go:168] "Request Body" body=""
	I1205 06:44:32.276215  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:32.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:32.276593  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:32.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:44:32.776253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:32.776599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:33.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:44:33.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:33.276485  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:33.776213  480112 type.go:168] "Request Body" body=""
	I1205 06:44:33.776298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:33.776635  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:34.276158  480112 type.go:168] "Request Body" body=""
	I1205 06:44:34.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:34.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:34.276654  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:34.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:34.776174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:34.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:35.276126  480112 type.go:168] "Request Body" body=""
	I1205 06:44:35.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:35.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:35.776308  480112 type.go:168] "Request Body" body=""
	I1205 06:44:35.776391  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:35.776737  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:36.276100  480112 type.go:168] "Request Body" body=""
	I1205 06:44:36.276170  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:36.276424  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:36.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:44:36.776169  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:36.776558  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:36.776623  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:37.276145  480112 type.go:168] "Request Body" body=""
	I1205 06:44:37.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:37.276543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:37.776081  480112 type.go:168] "Request Body" body=""
	I1205 06:44:37.776159  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:37.776465  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:38.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:44:38.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:38.276595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:38.776175  480112 type.go:168] "Request Body" body=""
	I1205 06:44:38.776258  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:38.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:38.776666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:39.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:39.276167  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:39.276460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:39.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:44:39.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:39.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:40.276186  480112 type.go:168] "Request Body" body=""
	I1205 06:44:40.276264  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:40.276597  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:40.776207  480112 type.go:168] "Request Body" body=""
	I1205 06:44:40.776284  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:40.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:41.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:44:41.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:41.276518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:41.276567  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:41.776214  480112 type.go:168] "Request Body" body=""
	I1205 06:44:41.776290  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:41.776631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:42.276210  480112 type.go:168] "Request Body" body=""
	I1205 06:44:42.276285  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:42.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:42.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:44:42.776230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:43.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:43.276333  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:43.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:43.276715  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:43.776156  480112 type.go:168] "Request Body" body=""
	I1205 06:44:43.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:43.776564  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:44.276236  480112 type.go:168] "Request Body" body=""
	I1205 06:44:44.276330  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:44.276658  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:44.776699  480112 type.go:168] "Request Body" body=""
	I1205 06:44:44.776770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:44.777048  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:45.276733  480112 type.go:168] "Request Body" body=""
	I1205 06:44:45.276817  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:45.277141  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:45.277194  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:45.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:44:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:45.776557  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:46.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:46.276336  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:46.276649  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:46.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:44:46.776179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:46.776440  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:47.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:44:47.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:47.276545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:47.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:44:47.776314  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:47.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:47.776761  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:48.276091  480112 type.go:168] "Request Body" body=""
	I1205 06:44:48.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:48.276417  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:48.776114  480112 type.go:168] "Request Body" body=""
	I1205 06:44:48.776189  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:48.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:49.276135  480112 type.go:168] "Request Body" body=""
	I1205 06:44:49.276211  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:49.276541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:49.776067  480112 type.go:168] "Request Body" body=""
	I1205 06:44:49.776138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:49.776462  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:50.276149  480112 type.go:168] "Request Body" body=""
	I1205 06:44:50.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:50.276564  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:50.276626  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:50.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:44:50.776222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:50.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:51.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:44:51.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:51.276510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:51.776099  480112 type.go:168] "Request Body" body=""
	I1205 06:44:51.776179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:51.776465  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:52.276160  480112 type.go:168] "Request Body" body=""
	I1205 06:44:52.276237  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:52.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:52.776010  480112 type.go:168] "Request Body" body=""
	I1205 06:44:52.776087  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:52.776341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:52.776389  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:53.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:44:53.276192  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:53.276529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:53.776263  480112 type.go:168] "Request Body" body=""
	I1205 06:44:53.776335  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:53.776669  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:54.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:44:54.276298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:54.276617  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:54.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:44:54.776724  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:54.777046  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:54.777107  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed for readability: the GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-787602 repeated every ~500ms from 06:44:55.276 through 06:45:56.276, each request carrying the same headers (Accept: application/vnd.kubernetes.protobuf,application/json; User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format) and each attempt returning no response (status="" headers="" milliseconds=0); every 2-2.5s node_ready.go:55 logged the same warning: error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1205 06:45:56.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.776281  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.776602  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.276123  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.276534  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.776290  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.776381  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.776755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.276054  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.276133  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.276434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:58.776554  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:59.276223  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:59.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.276689  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.276784  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.277182  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.776974  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.777053  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.777397  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:00.777455  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:01.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.276181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:01.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.276247  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.276641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:03.276061  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.276138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:03.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:03.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.276265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.776433  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.776505  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.776849  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:05.276666  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.276770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:05.277147  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:05.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:06.276074  480112 node_ready.go:38] duration metric: took 6m0.000169865s for node "functional-787602" to be "Ready" ...
	I1205 06:46:06.279558  480112 out.go:203] 
	W1205 06:46:06.282535  480112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:46:06.282557  480112 out.go:285] * 
	W1205 06:46:06.284719  480112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:46:06.287525  480112 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289522924Z" level=info msg="Using the internal default seccomp profile"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289531491Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289537603Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289543675Z" level=info msg="RDT not available in the host system"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.289557288Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.290430521Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.290452134Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.290478998Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291097926Z" level=info msg="Conmon does support the --sync option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291117701Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291242224Z" level=info msg="Updated default CNI network name to "
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.291946561Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.292315246Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.29242024Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.33570484Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.335957142Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336033607Z" level=info msg="Create NRI interface"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336191582Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336206983Z" level=info msg="runtime interface created"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336225494Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336231804Z" level=info msg="runtime interface starting up..."
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336238196Z" level=info msg="starting plugins..."
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336255049Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:40:03 functional-787602 crio[6033]: time="2025-12-05T06:40:03.336317647Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:40:03 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:10.525839    9417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.526417    9417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.527915    9417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.528246    9417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:10.529719    9417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:46:10 up  3:28,  0 user,  load average: 0.35, 0.22, 0.48
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 05 06:46:08 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:08 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:08 functional-787602 kubelet[9289]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:08 functional-787602 kubelet[9289]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:08 functional-787602 kubelet[9289]: E1205 06:46:08.860652    9289 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:08 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:09 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1135.
	Dec 05 06:46:09 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:09 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:09 functional-787602 kubelet[9323]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:09 functional-787602 kubelet[9323]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:09 functional-787602 kubelet[9323]: E1205 06:46:09.552074    9323 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:09 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:09 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:10 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 05 06:46:10 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:10 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:10 functional-787602 kubelet[9369]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:10 functional-787602 kubelet[9369]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:10 functional-787602 kubelet[9369]: E1205 06:46:10.348631    9369 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:10 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:10 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (332.437971ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 kubectl -- --context functional-787602 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 kubectl -- --context functional-787602 get pods: exit status 1 (105.383609ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-787602 kubectl -- --context functional-787602 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (327.758743ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 logs -n 25: (1.02724706s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-252233 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh     │ functional-252233 ssh pgrep buildkitd                                                                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image   │ functional-252233 image ls --format yaml --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                            │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format json --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format table --alsologtostderr                                                                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls                                                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete  │ -p functional-252233                                                                                                                              │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start   │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ start   │ -p functional-787602 --alsologtostderr -v=8                                                                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:39 UTC │                     │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:latest                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add minikube-local-cache-test:functional-787602                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache delete minikube-local-cache-test:functional-787602                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl images                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ cache   │ functional-787602 cache reload                                                                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ kubectl │ functional-787602 kubectl -- --context functional-787602 get pods                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:39:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:39:59.523609  480112 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:39:59.523793  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.523816  480112 out.go:374] Setting ErrFile to fd 2...
	I1205 06:39:59.523837  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.524220  480112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:39:59.524681  480112 out.go:368] Setting JSON to false
	I1205 06:39:59.525943  480112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12127,"bootTime":1764904673,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:39:59.526021  480112 start.go:143] virtualization:  
	I1205 06:39:59.529485  480112 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:39:59.533299  480112 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:39:59.533430  480112 notify.go:221] Checking for updates...
	I1205 06:39:59.539032  480112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:39:59.542038  480112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:39:59.544821  480112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:39:59.547558  480112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:39:59.550303  480112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:39:59.553653  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:39:59.553793  480112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:39:59.587101  480112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:39:59.587209  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.647016  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.637315829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.647121  480112 docker.go:319] overlay module found
	I1205 06:39:59.650323  480112 out.go:179] * Using the docker driver based on existing profile
	I1205 06:39:59.653400  480112 start.go:309] selected driver: docker
	I1205 06:39:59.653426  480112 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.653516  480112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:39:59.653622  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.713012  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.702941112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.713548  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:39:59.713621  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
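
[Editor's note] The two cni.go lines above encode a simple rule: the docker driver combined with a runtime other than Docker (crio here) gets kindnet recommended. A minimal Go sketch of that decision, paraphrased from the log message rather than taken from minikube's real cni.go; the "bridge" fallback is an assumption for illustration:

    package main

    import "fmt"

    // recommendCNI paraphrases the decision logged by cni.go:143: the docker
    // driver plus a non-docker runtime yields kindnet. The "bridge" fallback
    // is illustrative, not minikube's actual default.
    func recommendCNI(driver, runtime string) string {
    	if driver == "docker" && runtime != "docker" {
    		return "kindnet"
    	}
    	return "bridge"
    }

    func main() {
    	fmt.Println(recommendCNI("docker", "crio")) // prints: kindnet
    }
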
	I1205 06:39:59.713678  480112 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.716888  480112 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:39:59.719675  480112 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:39:59.722682  480112 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:39:59.725781  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:39:59.725946  480112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:39:59.745247  480112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:39:59.745269  480112 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
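
[Editor's note] image.go probes the local daemon before pulling the kicbase image. A hedged sketch of one way to perform that probe: `docker image inspect` exits non-zero when the reference is absent, so a nil error means the image is already local and the pull can be skipped (the helper below is illustrative, not minikube's actual image.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already has ref.
    // `docker image inspect` exits 0 only when the image exists locally.
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974"
    	if imageInDaemon(ref) {
    		fmt.Println("found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println("not cached locally; a pull would be needed")
    	}
    }
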
	W1205 06:39:59.798316  480112 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:40:00.046313  480112 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
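
[Editor's note] Both preload mirrors return 404 for v1.35.0-beta.0, so the run falls back to the per-image cache seen below. A minimal sketch of that probe-and-fall-back pattern, using plain HEAD requests against the two URLs from the log (not minikube's actual preload.go):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Mirror order matches the two warnings in the log above.
    	mirrors := []string{
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4",
    		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4",
    	}
    	for _, u := range mirrors {
    		resp, err := http.Head(u)
    		if err != nil {
    			fmt.Println("probe failed:", err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("preload available:", u)
    			return
    		}
    		fmt.Println("status code:", resp.StatusCode, "for", u)
    	}
    	fmt.Println("no preload found; falling back to per-image cache")
    }
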
	I1205 06:40:00.046504  480112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:40:00.046814  480112 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:40:00.046857  480112 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.046933  480112 start.go:364] duration metric: took 43.709µs to acquireMachinesLock for "functional-787602"
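
[Editor's note] acquireMachinesLock shows a named lock with Delay:500ms and Timeout:10m0s. A hedged sketch of one way such a lock can be implemented, with O_EXCL file creation and a retry loop; the parameters mirror the log, but the lock path and helper are made up for illustration (this is not minikube's lock package):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire retries exclusive creation of a lock file until it succeeds or
    // the timeout expires, mirroring the Delay/Timeout fields logged above.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out waiting for " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held")
    }
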
	I1205 06:40:00.046950  480112 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:40:00.046969  480112 fix.go:54] fixHost starting: 
	I1205 06:40:00.047287  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:00.049366  480112 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049539  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:40:00.049567  480112 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 224.085µs
	I1205 06:40:00.049597  480112 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:40:00.049636  480112 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049702  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:40:00.049722  480112 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 89.733µs
	I1205 06:40:00.050277  480112 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:40:00.050353  480112 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051458  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:40:00.051500  480112 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.155091ms
	I1205 06:40:00.051529  480112 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051582  480112 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051659  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:40:00.051680  480112 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 114.34µs
	I1205 06:40:00.051702  480112 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051741  480112 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051791  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:40:00.051822  480112 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 83.054µs
	I1205 06:40:00.063751  480112 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064371  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:40:00.064388  480112 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 658.756µs
	I1205 06:40:00.064400  480112 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:40:00.064453  480112 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064504  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:40:00.064510  480112 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 92.349µs
	I1205 06:40:00.064516  480112 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:40:00.064532  480112 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.067976  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:40:00.068029  480112 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.495239ms
	I1205 06:40:00.068074  480112 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:40:00.058631  480112 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:40:00.068155  480112 cache.go:87] Successfully saved all images to host disk.
	I1205 06:40:00.156134  480112 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:40:00.156177  480112 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:40:00.160840  480112 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:40:00.160889  480112 machine.go:94] provisionDockerMachine start ...
	I1205 06:40:00.161003  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.232523  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.232876  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.232886  480112 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:40:00.484459  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.484485  480112 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:40:00.484571  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.540991  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.541328  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.541341  480112 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:40:00.761314  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.761404  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.782315  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.782666  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.782689  480112 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:40:00.934901  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:40:00.934930  480112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:40:00.935005  480112 ubuntu.go:190] setting up certificates
	I1205 06:40:00.935016  480112 provision.go:84] configureAuth start
	I1205 06:40:00.935097  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:00.952439  480112 provision.go:143] copyHostCerts
	I1205 06:40:00.952486  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952527  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:40:00.952543  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952619  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:40:00.952705  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952727  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:40:00.952737  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952765  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:40:00.952809  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952828  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:40:00.952837  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952861  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:40:00.952911  480112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:40:01.160028  480112 provision.go:177] copyRemoteCerts
	I1205 06:40:01.160150  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:40:01.160201  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.184354  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.295740  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:40:01.295812  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:40:01.316925  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:40:01.316986  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:40:01.339507  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:40:01.339574  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:40:01.358710  480112 provision.go:87] duration metric: took 423.67042ms to configureAuth
	I1205 06:40:01.358788  480112 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:40:01.358981  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:01.359104  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.377010  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:01.377340  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:01.377360  480112 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:40:01.723262  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:40:01.723303  480112 machine.go:97] duration metric: took 1.56238873s to provisionDockerMachine
	I1205 06:40:01.723316  480112 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:40:01.723329  480112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:40:01.723398  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:40:01.723446  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.742177  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.847102  480112 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:40:01.850854  480112 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:40:01.850880  480112 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:40:01.850885  480112 command_runner.go:130] > VERSION_ID="12"
	I1205 06:40:01.850889  480112 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:40:01.850897  480112 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:40:01.850901  480112 command_runner.go:130] > ID=debian
	I1205 06:40:01.850906  480112 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:40:01.850910  480112 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:40:01.850918  480112 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:40:01.850955  480112 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:40:01.850978  480112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:40:01.850990  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:40:01.851049  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:40:01.851138  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:40:01.851149  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /etc/ssl/certs/4441472.pem
	I1205 06:40:01.851230  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:40:01.851237  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> /etc/test/nested/copy/444147/hosts
	I1205 06:40:01.851282  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:40:01.859516  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:01.879483  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:40:01.898655  480112 start.go:296] duration metric: took 175.324245ms for postStartSetup
	I1205 06:40:01.898744  480112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:40:01.898799  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.917838  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.020238  480112 command_runner.go:130] > 18%
	I1205 06:40:02.020354  480112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:40:02.025815  480112 command_runner.go:130] > 160G
	I1205 06:40:02.026493  480112 fix.go:56] duration metric: took 1.979519007s for fixHost
	I1205 06:40:02.026516  480112 start.go:83] releasing machines lock for "functional-787602", held for 1.979574696s
	I1205 06:40:02.026587  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:02.046979  480112 ssh_runner.go:195] Run: cat /version.json
	I1205 06:40:02.047030  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.047280  480112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:40:02.047345  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.081102  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.085747  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.189932  480112 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
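
[Editor's note] /version.json carries the four fields the tooling compares against the expected kicbase build. A small sketch that decodes exactly the payload shown above; the struct is inferred from the visible JSON keys, not taken from minikube's source:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors the keys visible in the /version.json line above.
    type versionInfo struct {
    	ISOVersion      string `json:"iso_version"`
    	KicbaseVersion  string `json:"kicbase_version"`
    	MinikubeVersion string `json:"minikube_version"`
    	Commit          string `json:"commit"`
    }

    func main() {
    	raw := `{"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}`
    	var v versionInfo
    	if err := json.Unmarshal([]byte(raw), &v); err != nil {
    		panic(err)
    	}
    	fmt.Println(v.KicbaseVersion, v.MinikubeVersion)
    }
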
	I1205 06:40:02.190072  480112 ssh_runner.go:195] Run: systemctl --version
	I1205 06:40:02.280062  480112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:40:02.282950  480112 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:40:02.282989  480112 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:40:02.283061  480112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:40:02.319896  480112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:40:02.324212  480112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:40:02.324374  480112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:40:02.324444  480112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:40:02.332670  480112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:40:02.332736  480112 start.go:496] detecting cgroup driver to use...
	I1205 06:40:02.332774  480112 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:40:02.332831  480112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:40:02.348502  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:40:02.361851  480112 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:40:02.361926  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:40:02.380602  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:40:02.393710  480112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:40:02.522109  480112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:40:02.655884  480112 docker.go:234] disabling docker service ...
	I1205 06:40:02.655958  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:40:02.673330  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:40:02.687649  480112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:40:02.802223  480112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:40:02.930343  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:40:02.944017  480112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:40:02.956898  480112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 06:40:02.958122  480112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:40:02.958248  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.967567  480112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:40:02.967712  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.976781  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.985897  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.994984  480112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:40:03.003975  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.013874  480112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.022919  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.032163  480112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:40:03.038816  480112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:40:03.039990  480112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:40:03.049427  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.175291  480112 ssh_runner.go:195] Run: sudo systemctl restart crio
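
[Editor's note] The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, then restart crio. A sketch of the first few rewrites applied in Go to a made-up sample of that file (the input values are illustrative; the real 02-crio.conf has more keys, and the sysctl injection step is omitted for brevity):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative sample; the real /etc/crio/crio.conf.d/02-crio.conf has more keys.
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// The log deletes conmon_cgroup and re-adds it after cgroup_manager;
    	// collapsed here into a single substitution with the same end state.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
    		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    	fmt.Print(conf)
    }
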
	I1205 06:40:03.341374  480112 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:40:03.341477  480112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:40:03.345425  480112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 06:40:03.345448  480112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:40:03.345464  480112 command_runner.go:130] > Device: 0,73	Inode: 1755        Links: 1
	I1205 06:40:03.345472  480112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:03.345477  480112 command_runner.go:130] > Access: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345484  480112 command_runner.go:130] > Modify: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345489  480112 command_runner.go:130] > Change: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345493  480112 command_runner.go:130] >  Birth: -
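
[Editor's note] After restarting crio, start.go waits up to 60s for /var/run/crio/crio.sock before probing crictl. A sketch of such a poll loop, checking that the path exists and is actually a unix socket (consistent with the srw- mode in the stat output above); the 500ms poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout expires, mirroring the "Will wait 60s for socket path" step.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("socket ready")
    }
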
	I1205 06:40:03.345525  480112 start.go:564] Will wait 60s for crictl version
	I1205 06:40:03.345579  480112 ssh_runner.go:195] Run: which crictl
	I1205 06:40:03.348931  480112 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:40:03.349401  480112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:40:03.373825  480112 command_runner.go:130] > Version:  0.1.0
	I1205 06:40:03.373849  480112 command_runner.go:130] > RuntimeName:  cri-o
	I1205 06:40:03.373973  480112 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1205 06:40:03.374159  480112 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:40:03.376168  480112 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:40:03.376252  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.403613  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.403690  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.403710  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.403727  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.403756  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.403777  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.403795  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.403813  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.403844  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.403865  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.403879  480112 command_runner.go:130] >      static
	I1205 06:40:03.403895  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.403924  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.403945  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.403964  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.403979  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.404006  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.404027  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.404044  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.404059  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.406234  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.432776  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.432811  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.432836  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.432843  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.432849  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.432862  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.432872  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.432877  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.432886  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.432908  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.432916  480112 command_runner.go:130] >      static
	I1205 06:40:03.432920  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.432948  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.432956  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.432959  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.432963  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.432970  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.432998  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.433006  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.433010  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.440242  480112 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:40:03.443151  480112 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:40:03.459691  480112 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:40:03.463610  480112 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:40:03.463748  480112 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:40:03.463853  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:40:03.463910  480112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:40:03.497207  480112 command_runner.go:130] > {
	I1205 06:40:03.497226  480112 command_runner.go:130] >   "images":  [
	I1205 06:40:03.497231  480112 command_runner.go:130] >     {
	I1205 06:40:03.497239  480112 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:40:03.497244  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497250  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:40:03.497253  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497257  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497267  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1205 06:40:03.497271  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497276  480112 command_runner.go:130] >       "size":  "29035622",
	I1205 06:40:03.497279  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497283  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497286  480112 command_runner.go:130] >     },
	I1205 06:40:03.497290  480112 command_runner.go:130] >     {
	I1205 06:40:03.497297  480112 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:40:03.497301  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497306  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:40:03.497309  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497313  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497321  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1205 06:40:03.497324  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497328  480112 command_runner.go:130] >       "size":  "74488375",
	I1205 06:40:03.497332  480112 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:40:03.497336  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497340  480112 command_runner.go:130] >     },
	I1205 06:40:03.497343  480112 command_runner.go:130] >     {
	I1205 06:40:03.497350  480112 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:40:03.497354  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497359  480112 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:40:03.497362  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497366  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497388  480112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1205 06:40:03.497393  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497397  480112 command_runner.go:130] >       "size":  "60854229",
	I1205 06:40:03.497401  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497405  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497409  480112 command_runner.go:130] >       },
	I1205 06:40:03.497413  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497417  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497421  480112 command_runner.go:130] >     },
	I1205 06:40:03.497424  480112 command_runner.go:130] >     {
	I1205 06:40:03.497430  480112 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:40:03.497434  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497439  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:40:03.497442  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497446  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497454  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1205 06:40:03.497459  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497463  480112 command_runner.go:130] >       "size":  "84947242",
	I1205 06:40:03.497466  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497469  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497473  480112 command_runner.go:130] >       },
	I1205 06:40:03.497476  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497480  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497483  480112 command_runner.go:130] >     },
	I1205 06:40:03.497486  480112 command_runner.go:130] >     {
	I1205 06:40:03.497492  480112 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:40:03.497496  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497501  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:40:03.497505  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497509  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497517  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1205 06:40:03.497520  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497529  480112 command_runner.go:130] >       "size":  "72167568",
	I1205 06:40:03.497539  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497542  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497545  480112 command_runner.go:130] >       },
	I1205 06:40:03.497549  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497552  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497555  480112 command_runner.go:130] >     },
	I1205 06:40:03.497558  480112 command_runner.go:130] >     {
	I1205 06:40:03.497564  480112 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:40:03.497568  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497573  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:40:03.497575  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497579  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497588  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1205 06:40:03.497592  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497595  480112 command_runner.go:130] >       "size":  "74105124",
	I1205 06:40:03.497599  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497603  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497606  480112 command_runner.go:130] >     },
	I1205 06:40:03.497609  480112 command_runner.go:130] >     {
	I1205 06:40:03.497615  480112 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:40:03.497618  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497624  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:40:03.497627  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497630  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497638  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1205 06:40:03.497641  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497645  480112 command_runner.go:130] >       "size":  "49819792",
	I1205 06:40:03.497648  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497652  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497655  480112 command_runner.go:130] >       },
	I1205 06:40:03.497659  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497663  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497666  480112 command_runner.go:130] >     },
	I1205 06:40:03.497672  480112 command_runner.go:130] >     {
	I1205 06:40:03.497679  480112 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:40:03.497683  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497687  480112 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.497690  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497694  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497701  480112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1205 06:40:03.497705  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497708  480112 command_runner.go:130] >       "size":  "517328",
	I1205 06:40:03.497712  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497715  480112 command_runner.go:130] >         "value":  "65535"
	I1205 06:40:03.497718  480112 command_runner.go:130] >       },
	I1205 06:40:03.497722  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497726  480112 command_runner.go:130] >       "pinned":  true
	I1205 06:40:03.497729  480112 command_runner.go:130] >     }
	I1205 06:40:03.497732  480112 command_runner.go:130] >   ]
	I1205 06:40:03.497735  480112 command_runner.go:130] > }
	I1205 06:40:03.499390  480112 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:40:03.499408  480112 cache_images.go:86] Images are preloaded, skipping loading
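
[Editor's note] The JSON dump above is what backs the "all images are preloaded" conclusion: every required tag appears in `sudo crictl images --output json`. A sketch of that verification, with field names taken from the dump; the required list below is a subset for illustration, and running it needs crictl on the node:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList matches the shape of `crictl images --output json` above.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Println("bad JSON:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Subset of the tags the log shows as required for v1.35.0-beta.0 + crio.
    	for _, want := range []string{
    		"registry.k8s.io/pause:3.10.1",
    		"registry.k8s.io/etcd:3.6.5-0",
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    	} {
    		fmt.Printf("%s preloaded=%v\n", want, have[want])
    	}
    }
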
	I1205 06:40:03.499417  480112 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:40:03.499515  480112 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:40:03.499587  480112 ssh_runner.go:195] Run: crio config
	I1205 06:40:03.548638  480112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 06:40:03.548661  480112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 06:40:03.548669  480112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 06:40:03.548671  480112 command_runner.go:130] > #
	I1205 06:40:03.548686  480112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 06:40:03.548693  480112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 06:40:03.548700  480112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 06:40:03.548716  480112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 06:40:03.548720  480112 command_runner.go:130] > # reload'.
	I1205 06:40:03.548726  480112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 06:40:03.548733  480112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 06:40:03.548739  480112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 06:40:03.548745  480112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 06:40:03.548748  480112 command_runner.go:130] > [crio]
	I1205 06:40:03.548755  480112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 06:40:03.548760  480112 command_runner.go:130] > # container images, in this directory.
	I1205 06:40:03.549179  480112 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 06:40:03.549226  480112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 06:40:03.549246  480112 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1205 06:40:03.549268  480112 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I1205 06:40:03.549287  480112 command_runner.go:130] > # imagestore = ""
	I1205 06:40:03.549306  480112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 06:40:03.549324  480112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 06:40:03.549341  480112 command_runner.go:130] > # storage_driver = "overlay"
	I1205 06:40:03.549356  480112 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1205 06:40:03.549385  480112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 06:40:03.549402  480112 command_runner.go:130] > # storage_option = [
	I1205 06:40:03.549417  480112 command_runner.go:130] > # ]
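	As a concrete illustration of the commented-out default above, a sketch of passing a driver option; the overlay "mountopt" option exists in containers-storage, but the specific value here is an assumption, not something this cluster sets:

		storage_option = [
			"overlay.mountopt=nodev",  # assumption: forbid device files on the overlay mount
		]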
	I1205 06:40:03.549435  480112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 06:40:03.549461  480112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 06:40:03.549487  480112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 06:40:03.549504  480112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 06:40:03.549521  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 06:40:03.549545  480112 command_runner.go:130] > # always happen on a node reboot
	I1205 06:40:03.549737  480112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 06:40:03.549768  480112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 06:40:03.549775  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 06:40:03.549781  480112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 06:40:03.549785  480112 command_runner.go:130] > # version_file_persist = ""
	I1205 06:40:03.549793  480112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 06:40:03.549801  480112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 06:40:03.549805  480112 command_runner.go:130] > # internal_wipe = true
	I1205 06:40:03.549813  480112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 06:40:03.549818  480112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 06:40:03.549822  480112 command_runner.go:130] > # internal_repair = true
	I1205 06:40:03.549828  480112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 06:40:03.549834  480112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 06:40:03.549840  480112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 06:40:03.549845  480112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 06:40:03.549854  480112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 06:40:03.549858  480112 command_runner.go:130] > [crio.api]
	I1205 06:40:03.549863  480112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 06:40:03.549867  480112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 06:40:03.549872  480112 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 06:40:03.549876  480112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 06:40:03.549883  480112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 06:40:03.549889  480112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 06:40:03.549892  480112 command_runner.go:130] > # stream_port = "0"
	I1205 06:40:03.549897  480112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 06:40:03.549901  480112 command_runner.go:130] > # stream_enable_tls = false
	I1205 06:40:03.549907  480112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 06:40:03.549911  480112 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 06:40:03.549917  480112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 06:40:03.549923  480112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549927  480112 command_runner.go:130] > # stream_tls_cert = ""
	I1205 06:40:03.549933  480112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 06:40:03.549939  480112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549942  480112 command_runner.go:130] > # stream_tls_key = ""
	I1205 06:40:03.549948  480112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 06:40:03.549954  480112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 06:40:03.549958  480112 command_runner.go:130] > # automatically pick up the changes.
	I1205 06:40:03.549962  480112 command_runner.go:130] > # stream_tls_ca = ""
	I1205 06:40:03.549979  480112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549984  480112 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 06:40:03.549991  480112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549996  480112 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 06:40:03.550002  480112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 06:40:03.550007  480112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 06:40:03.550010  480112 command_runner.go:130] > [crio.runtime]
	I1205 06:40:03.550016  480112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 06:40:03.550021  480112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 06:40:03.550025  480112 command_runner.go:130] > # "nofile=1024:2048"
	I1205 06:40:03.550034  480112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 06:40:03.550038  480112 command_runner.go:130] > # default_ulimits = [
	I1205 06:40:03.550041  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550047  480112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 06:40:03.550050  480112 command_runner.go:130] > # no_pivot = false
	I1205 06:40:03.550056  480112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 06:40:03.550062  480112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 06:40:03.550067  480112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 06:40:03.550072  480112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 06:40:03.550077  480112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 06:40:03.550084  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550087  480112 command_runner.go:130] > # conmon = ""
	I1205 06:40:03.550092  480112 command_runner.go:130] > # Cgroup setting for conmon
	I1205 06:40:03.550099  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 06:40:03.550102  480112 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 06:40:03.550108  480112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 06:40:03.550115  480112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 06:40:03.550124  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550128  480112 command_runner.go:130] > # conmon_env = [
	I1205 06:40:03.550130  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550136  480112 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 06:40:03.550141  480112 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 06:40:03.550146  480112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 06:40:03.550150  480112 command_runner.go:130] > # default_env = [
	I1205 06:40:03.550152  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550158  480112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 06:40:03.550165  480112 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1205 06:40:03.550169  480112 command_runner.go:130] > # selinux = false
	I1205 06:40:03.550180  480112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 06:40:03.550188  480112 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1205 06:40:03.550193  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550197  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.550202  480112 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1205 06:40:03.550212  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550216  480112 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1205 06:40:03.550223  480112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 06:40:03.550229  480112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 06:40:03.550235  480112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 06:40:03.550241  480112 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 06:40:03.550246  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550250  480112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 06:40:03.550255  480112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 06:40:03.550259  480112 command_runner.go:130] > # the cgroup blockio controller.
	I1205 06:40:03.550263  480112 command_runner.go:130] > # blockio_config_file = ""
	I1205 06:40:03.550269  480112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 06:40:03.550273  480112 command_runner.go:130] > # blockio parameters.
	I1205 06:40:03.550277  480112 command_runner.go:130] > # blockio_reload = false
	I1205 06:40:03.550284  480112 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 06:40:03.550287  480112 command_runner.go:130] > # irqbalance daemon.
	I1205 06:40:03.550292  480112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 06:40:03.550298  480112 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask CRI-O should
	I1205 06:40:03.550305  480112 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 06:40:03.550313  480112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 06:40:03.550319  480112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 06:40:03.550325  480112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 06:40:03.550330  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550333  480112 command_runner.go:130] > # rdt_config_file = ""
	I1205 06:40:03.550338  480112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 06:40:03.550342  480112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 06:40:03.550348  480112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 06:40:03.550711  480112 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 06:40:03.550724  480112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 06:40:03.550731  480112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 06:40:03.550734  480112 command_runner.go:130] > # will be added.
	I1205 06:40:03.550738  480112 command_runner.go:130] > # default_capabilities = [
	I1205 06:40:03.550742  480112 command_runner.go:130] > # 	"CHOWN",
	I1205 06:40:03.550746  480112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 06:40:03.550749  480112 command_runner.go:130] > # 	"FSETID",
	I1205 06:40:03.550752  480112 command_runner.go:130] > # 	"FOWNER",
	I1205 06:40:03.550756  480112 command_runner.go:130] > # 	"SETGID",
	I1205 06:40:03.550759  480112 command_runner.go:130] > # 	"SETUID",
	I1205 06:40:03.550782  480112 command_runner.go:130] > # 	"SETPCAP",
	I1205 06:40:03.550786  480112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 06:40:03.550789  480112 command_runner.go:130] > # 	"KILL",
	I1205 06:40:03.550792  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550800  480112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 06:40:03.550810  480112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 06:40:03.550815  480112 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 06:40:03.550821  480112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 06:40:03.550827  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550831  480112 command_runner.go:130] > default_sysctls = [
	I1205 06:40:03.550835  480112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 06:40:03.550838  480112 command_runner.go:130] > ]
	I1205 06:40:03.550842  480112 command_runner.go:130] > # List of devices on the host that a
	I1205 06:40:03.550849  480112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 06:40:03.550852  480112 command_runner.go:130] > # allowed_devices = [
	I1205 06:40:03.550856  480112 command_runner.go:130] > # 	"/dev/fuse",
	I1205 06:40:03.550859  480112 command_runner.go:130] > # 	"/dev/net/tun",
	I1205 06:40:03.550863  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550867  480112 command_runner.go:130] > # List of additional devices, specified as
	I1205 06:40:03.550875  480112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 06:40:03.550880  480112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 06:40:03.550886  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550889  480112 command_runner.go:130] > # additional_devices = [
	I1205 06:40:03.550894  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550899  480112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 06:40:03.550905  480112 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 06:40:03.550909  480112 command_runner.go:130] > # 	"/etc/cdi",
	I1205 06:40:03.550912  480112 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 06:40:03.550915  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550921  480112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 06:40:03.550927  480112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 06:40:03.550931  480112 command_runner.go:130] > # Defaults to false.
	I1205 06:40:03.550936  480112 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 06:40:03.550942  480112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 06:40:03.550949  480112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 06:40:03.550952  480112 command_runner.go:130] > # hooks_dir = [
	I1205 06:40:03.550956  480112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 06:40:03.550962  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550972  480112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 06:40:03.550979  480112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 06:40:03.550984  480112 command_runner.go:130] > # its default mounts from the following two files:
	I1205 06:40:03.550987  480112 command_runner.go:130] > #
	I1205 06:40:03.550993  480112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 06:40:03.550999  480112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 06:40:03.551004  480112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 06:40:03.551007  480112 command_runner.go:130] > #
	I1205 06:40:03.551013  480112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 06:40:03.551019  480112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 06:40:03.551025  480112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 06:40:03.551030  480112 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 06:40:03.551032  480112 command_runner.go:130] > #
	I1205 06:40:03.551036  480112 command_runner.go:130] > # default_mounts_file = ""
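	For illustration, a mounts file in the /SRC:/DST format described above contains one mount per line; both paths below are hypothetical:

		# /etc/containers/mounts.conf (hypothetical contents)
		/usr/share/my-secrets:/run/secrets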
	I1205 06:40:03.551041  480112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 06:40:03.551047  480112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 06:40:03.551051  480112 command_runner.go:130] > # pids_limit = -1
	I1205 06:40:03.551057  480112 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 06:40:03.551063  480112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 06:40:03.551069  480112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 06:40:03.551077  480112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 06:40:03.551080  480112 command_runner.go:130] > # log_size_max = -1
	I1205 06:40:03.551087  480112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 06:40:03.551091  480112 command_runner.go:130] > # log_to_journald = false
	I1205 06:40:03.551098  480112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 06:40:03.551103  480112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 06:40:03.551108  480112 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 06:40:03.551113  480112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 06:40:03.551118  480112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 06:40:03.551121  480112 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 06:40:03.551127  480112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 06:40:03.551131  480112 command_runner.go:130] > # read_only = false
	I1205 06:40:03.551137  480112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 06:40:03.551147  480112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 06:40:03.551151  480112 command_runner.go:130] > # live configuration reload.
	I1205 06:40:03.551154  480112 command_runner.go:130] > # log_level = "info"
	I1205 06:40:03.551160  480112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 06:40:03.551164  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.551168  480112 command_runner.go:130] > # log_filter = ""
	I1205 06:40:03.551174  480112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551180  480112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 06:40:03.551184  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551192  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551196  480112 command_runner.go:130] > # uid_mappings = ""
	I1205 06:40:03.551201  480112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551208  480112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 06:40:03.551212  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551219  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551223  480112 command_runner.go:130] > # gid_mappings = ""
	I1205 06:40:03.551229  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 06:40:03.551235  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551241  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551249  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551253  480112 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 06:40:03.551259  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 06:40:03.551264  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551271  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551278  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551282  480112 command_runner.go:130] > # minimum_mappable_gid = -1
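	A hedged example of the containerUID:HostUID:Size form described above, mapping container root onto an unprivileged host range; the host range 100000/65536 is an assumption, not from this run:

		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"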
	I1205 06:40:03.551288  480112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 06:40:03.551296  480112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 06:40:03.551302  480112 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 06:40:03.551306  480112 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 06:40:03.551311  480112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 06:40:03.551317  480112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 06:40:03.551322  480112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 06:40:03.551330  480112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 06:40:03.551333  480112 command_runner.go:130] > # drop_infra_ctr = true
	I1205 06:40:03.551340  480112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 06:40:03.551346  480112 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 06:40:03.551353  480112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 06:40:03.551357  480112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 06:40:03.551364  480112 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 06:40:03.551370  480112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 06:40:03.551375  480112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 06:40:03.551380  480112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 06:40:03.551384  480112 command_runner.go:130] > # shared_cpuset = ""
	I1205 06:40:03.551390  480112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 06:40:03.551395  480112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 06:40:03.551398  480112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 06:40:03.551405  480112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 06:40:03.551408  480112 command_runner.go:130] > # pinns_path = ""
	I1205 06:40:03.551414  480112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 06:40:03.551420  480112 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 06:40:03.551424  480112 command_runner.go:130] > # enable_criu_support = true
	I1205 06:40:03.551428  480112 command_runner.go:130] > # Enable/disable the generation of the container and
	I1205 06:40:03.551434  480112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 06:40:03.551438  480112 command_runner.go:130] > # enable_pod_events = false
	I1205 06:40:03.551444  480112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 06:40:03.551449  480112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 06:40:03.551453  480112 command_runner.go:130] > # default_runtime = "crun"
	I1205 06:40:03.551458  480112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 06:40:03.551466  480112 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1205 06:40:03.551475  480112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 06:40:03.551480  480112 command_runner.go:130] > # creation as a file is not desired either.
	I1205 06:40:03.551488  480112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 06:40:03.551495  480112 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 06:40:03.551499  480112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 06:40:03.551502  480112 command_runner.go:130] > # ]
	I1205 06:40:03.551511  480112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 06:40:03.551518  480112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 06:40:03.551524  480112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 06:40:03.551528  480112 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 06:40:03.551532  480112 command_runner.go:130] > #
	I1205 06:40:03.551536  480112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 06:40:03.551541  480112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 06:40:03.551544  480112 command_runner.go:130] > # runtime_type = "oci"
	I1205 06:40:03.551549  480112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 06:40:03.551553  480112 command_runner.go:130] > # inherit_default_runtime = false
	I1205 06:40:03.551558  480112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 06:40:03.551562  480112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 06:40:03.551566  480112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 06:40:03.551570  480112 command_runner.go:130] > # monitor_env = []
	I1205 06:40:03.551574  480112 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 06:40:03.551578  480112 command_runner.go:130] > # allowed_annotations = []
	I1205 06:40:03.551583  480112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 06:40:03.551587  480112 command_runner.go:130] > # no_sync_log = false
	I1205 06:40:03.551590  480112 command_runner.go:130] > # default_annotations = {}
	I1205 06:40:03.551594  480112 command_runner.go:130] > # stream_websockets = false
	I1205 06:40:03.551598  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.551631  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.551636  480112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 06:40:03.551643  480112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 06:40:03.551649  480112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 06:40:03.551656  480112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 06:40:03.551659  480112 command_runner.go:130] > #   in $PATH.
	I1205 06:40:03.551665  480112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 06:40:03.551669  480112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 06:40:03.551675  480112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 06:40:03.551678  480112 command_runner.go:130] > #   state.
	I1205 06:40:03.551685  480112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 06:40:03.551690  480112 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 06:40:03.551699  480112 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1205 06:40:03.551706  480112 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1205 06:40:03.551711  480112 command_runner.go:130] > #   the values from the default runtime at load time.
	I1205 06:40:03.551717  480112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 06:40:03.551723  480112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 06:40:03.551730  480112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 06:40:03.551736  480112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 06:40:03.551740  480112 command_runner.go:130] > #   The currently recognized values are:
	I1205 06:40:03.551747  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 06:40:03.551754  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 06:40:03.551761  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 06:40:03.551767  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 06:40:03.551774  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 06:40:03.551781  480112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 06:40:03.551788  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 06:40:03.551794  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for the container init process.
	I1205 06:40:03.551800  480112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 06:40:03.551807  480112 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1205 06:40:03.551813  480112 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1205 06:40:03.551819  480112 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1205 06:40:03.551828  480112 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1205 06:40:03.551834  480112 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1205 06:40:03.551840  480112 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1205 06:40:03.551848  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1205 06:40:03.551854  480112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 06:40:03.551858  480112 command_runner.go:130] > #   deprecated option "conmon".
	I1205 06:40:03.551865  480112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 06:40:03.551870  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 06:40:03.551877  480112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 06:40:03.551882  480112 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 06:40:03.551888  480112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 06:40:03.551893  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 06:40:03.551900  480112 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1205 06:40:03.551907  480112 command_runner.go:130] > #   conmon-rs by using:
	I1205 06:40:03.551915  480112 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1205 06:40:03.551924  480112 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1205 06:40:03.551931  480112 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1205 06:40:03.551937  480112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 06:40:03.551943  480112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 06:40:03.551950  480112 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1205 06:40:03.551958  480112 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1205 06:40:03.551964  480112 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1205 06:40:03.551971  480112 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1205 06:40:03.551979  480112 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1205 06:40:03.551983  480112 command_runner.go:130] > #   when a machine crash happens.
	I1205 06:40:03.551990  480112 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1205 06:40:03.551997  480112 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1205 06:40:03.552005  480112 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1205 06:40:03.552009  480112 command_runner.go:130] > #   seccomp profile for the runtime.
	I1205 06:40:03.552015  480112 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1205 06:40:03.552022  480112 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
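	Putting the fields documented above together, a sketch of registering an additional VM-type handler; the handler name and every path are assumptions for illustration, not part of this cluster's config:

		[crio.runtime.runtimes.kata]
		runtime_path = "/usr/bin/kata-runtime"    # assumed install path
		runtime_type = "vm"
		runtime_root = "/run/vc"                  # assumed state directory
		runtime_config_path = "/etc/kata-containers/configuration.toml"  # allowed only for the VM runtime_type
		privileged_without_host_devices = true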
	I1205 06:40:03.552025  480112 command_runner.go:130] > #
	I1205 06:40:03.552029  480112 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 06:40:03.552032  480112 command_runner.go:130] > #
	I1205 06:40:03.552038  480112 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 06:40:03.552044  480112 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 06:40:03.552046  480112 command_runner.go:130] > #
	I1205 06:40:03.552053  480112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 06:40:03.552058  480112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 06:40:03.552061  480112 command_runner.go:130] > #
	I1205 06:40:03.552067  480112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 06:40:03.552070  480112 command_runner.go:130] > # feature.
	I1205 06:40:03.552072  480112 command_runner.go:130] > #
	I1205 06:40:03.552078  480112 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1205 06:40:03.552085  480112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 06:40:03.552090  480112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 06:40:03.552104  480112 command_runner.go:130] > # a blocked syscall and will then terminate the workload after a timeout of 5
	I1205 06:40:03.552111  480112 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 06:40:03.552114  480112 command_runner.go:130] > #
	I1205 06:40:03.552121  480112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 06:40:03.552127  480112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 06:40:03.552129  480112 command_runner.go:130] > #
	I1205 06:40:03.552135  480112 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1205 06:40:03.552141  480112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 06:40:03.552144  480112 command_runner.go:130] > #
	I1205 06:40:03.552150  480112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 06:40:03.552156  480112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 06:40:03.552159  480112 command_runner.go:130] > # limitation.
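	To make the notifier flow above concrete, a minimal pod sketch; the pod name and container image choice are illustrative, while the annotation key and the restartPolicy requirement come from the comments above:

		apiVersion: v1
		kind: Pod
		metadata:
		  name: seccomp-notifier-demo   # hypothetical name
		  annotations:
		    io.kubernetes.cri-o.seccompNotifierAction: "stop"
		spec:
		  restartPolicy: Never          # required, per the note above
		  containers:
		  - name: app
		    image: registry.k8s.io/pause:3.10.1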
	I1205 06:40:03.552163  480112 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1205 06:40:03.552167  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1205 06:40:03.552170  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552174  480112 command_runner.go:130] > runtime_root = "/run/crun"
	I1205 06:40:03.552178  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552182  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552188  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552193  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552197  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552200  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552204  480112 command_runner.go:130] > allowed_annotations = [
	I1205 06:40:03.552208  480112 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1205 06:40:03.552211  480112 command_runner.go:130] > ]
	I1205 06:40:03.552215  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552219  480112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 06:40:03.552223  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1205 06:40:03.552226  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552230  480112 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 06:40:03.552234  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552237  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552241  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552248  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552252  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552256  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552260  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552267  480112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 06:40:03.552272  480112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 06:40:03.552278  480112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 06:40:03.552286  480112 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1205 06:40:03.552300  480112 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1205 06:40:03.552310  480112 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1205 06:40:03.552319  480112 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1205 06:40:03.552324  480112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 06:40:03.552334  480112 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 06:40:03.552342  480112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 06:40:03.552349  480112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 06:40:03.552356  480112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 06:40:03.552359  480112 command_runner.go:130] > # Example:
	I1205 06:40:03.552364  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 06:40:03.552368  480112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 06:40:03.552373  480112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 06:40:03.552382  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 06:40:03.552385  480112 command_runner.go:130] > # cpuset = "0-1"
	I1205 06:40:03.552389  480112 command_runner.go:130] > # cpushares = "5"
	I1205 06:40:03.552392  480112 command_runner.go:130] > # cpuquota = "1000"
	I1205 06:40:03.552396  480112 command_runner.go:130] > # cpuperiod = "100000"
	I1205 06:40:03.552399  480112 command_runner.go:130] > # cpulimit = "35"
	I1205 06:40:03.552402  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.552406  480112 command_runner.go:130] > # The workload name is workload-type.
	I1205 06:40:03.552413  480112 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 06:40:03.552419  480112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 06:40:03.552424  480112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 06:40:03.552432  480112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 06:40:03.552438  480112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
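	A hedged pod-metadata sketch of the opt-in flow described above, following the $annotation_prefix.$resource/$ctrName form; the container name "app" and the share value are made up:

		metadata:
		  annotations:
		    io.crio/workload: ""                      # activation annotation; value ignored
		    io.crio.workload-type.cpushares/app: "5"  # per-container resource override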
	I1205 06:40:03.552445  480112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 06:40:03.552452  480112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 06:40:03.552456  480112 command_runner.go:130] > # Default value is set to true
	I1205 06:40:03.552461  480112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 06:40:03.552466  480112 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 06:40:03.552471  480112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 06:40:03.552475  480112 command_runner.go:130] > # Default value is set to 'false'
	I1205 06:40:03.552479  480112 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 06:40:03.552484  480112 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1205 06:40:03.552492  480112 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1205 06:40:03.552495  480112 command_runner.go:130] > # timezone = ""
	I1205 06:40:03.552502  480112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 06:40:03.552504  480112 command_runner.go:130] > #
	I1205 06:40:03.552510  480112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 06:40:03.552517  480112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1205 06:40:03.552520  480112 command_runner.go:130] > [crio.image]
	I1205 06:40:03.552526  480112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 06:40:03.552530  480112 command_runner.go:130] > # default_transport = "docker://"
	I1205 06:40:03.552536  480112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 06:40:03.552543  480112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552547  480112 command_runner.go:130] > # global_auth_file = ""
	I1205 06:40:03.552552  480112 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 06:40:03.552557  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552561  480112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.552568  480112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 06:40:03.552574  480112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552581  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552585  480112 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 06:40:03.552591  480112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 06:40:03.552597  480112 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 06:40:03.552603  480112 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 06:40:03.552608  480112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 06:40:03.552612  480112 command_runner.go:130] > # pause_command = "/pause"
	I1205 06:40:03.552622  480112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 06:40:03.552628  480112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 06:40:03.552641  480112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 06:40:03.552646  480112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 06:40:03.552652  480112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 06:40:03.552658  480112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 06:40:03.552661  480112 command_runner.go:130] > # pinned_images = [
	I1205 06:40:03.552664  480112 command_runner.go:130] > # ]
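	An illustrative pinned_images list exercising the match kinds described above; the quay.io entry is a made-up example of a trailing-glob pattern:

		pinned_images = [
			"registry.k8s.io/pause:3.10.1",  # exact match
			"quay.io/myorg/critical-*",      # glob: wildcard at the end
		]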
	I1205 06:40:03.552670  480112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 06:40:03.552675  480112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 06:40:03.552681  480112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 06:40:03.552687  480112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 06:40:03.552692  480112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 06:40:03.552697  480112 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1205 06:40:03.552702  480112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 06:40:03.552708  480112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 06:40:03.552716  480112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 06:40:03.552722  480112 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system-
	I1205 06:40:03.552728  480112 command_runner.go:130] > # wide policy will be used as a fallback. Must be an absolute path.
	I1205 06:40:03.552733  480112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
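	The policy file set above commonly contains an accept-everything default in test clusters; whether this run's /etc/crio/policy.json holds exactly this content is not shown in the log:

		{
		  "default": [
		    { "type": "insecureAcceptAnything" }
		  ]
		}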
	I1205 06:40:03.552738  480112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 06:40:03.552746  480112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 06:40:03.552749  480112 command_runner.go:130] > # changing them here.
	I1205 06:40:03.552755  480112 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1205 06:40:03.552758  480112 command_runner.go:130] > # insecure_registries = [
	I1205 06:40:03.552761  480112 command_runner.go:130] > # ]
	I1205 06:40:03.552767  480112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 06:40:03.552772  480112 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1205 06:40:03.552776  480112 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 06:40:03.552780  480112 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 06:40:03.553031  480112 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 06:40:03.553083  480112 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1205 06:40:03.553106  480112 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1205 06:40:03.553125  480112 command_runner.go:130] > # auto_reload_registries = false
	I1205 06:40:03.553145  480112 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1205 06:40:03.553166  480112 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1205 06:40:03.553207  480112 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1205 06:40:03.553227  480112 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1205 06:40:03.553245  480112 command_runner.go:130] > # The mode of short name resolution.
	I1205 06:40:03.553268  480112 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1205 06:40:03.553288  480112 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1205 06:40:03.553305  480112 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1205 06:40:03.553320  480112 command_runner.go:130] > # short_name_mode = "enforcing"
	I1205 06:40:03.553338  480112 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1205 06:40:03.553365  480112 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1205 06:40:03.553538  480112 command_runner.go:130] > # oci_artifact_mount_support = true
	I1205 06:40:03.553551  480112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 06:40:03.553555  480112 command_runner.go:130] > # CNI plugins.
	I1205 06:40:03.553559  480112 command_runner.go:130] > [crio.network]
	I1205 06:40:03.553564  480112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 06:40:03.553570  480112 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 06:40:03.553574  480112 command_runner.go:130] > # cni_default_network = ""
	I1205 06:40:03.553580  480112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 06:40:03.553587  480112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 06:40:03.553592  480112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 06:40:03.553597  480112 command_runner.go:130] > # plugin_dirs = [
	I1205 06:40:03.553600  480112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 06:40:03.553603  480112 command_runner.go:130] > # ]
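network_dir holds CNI config lists consumed in lexical order. Since this run ends up recommending kindnet (see the cni.go lines further down), a trimmed sketch of what such a file in /etc/cni/net.d/ can look like, with illustrative values:

    {
      "cniVersion": "0.4.0",
      "name": "kindnet",
      "plugins": [
        {
          "type": "ptp",
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }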
	I1205 06:40:03.553607  480112 command_runner.go:130] > # List of included pod metrics.
	I1205 06:40:03.553616  480112 command_runner.go:130] > # included_pod_metrics = [
	I1205 06:40:03.553620  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553625  480112 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 06:40:03.553628  480112 command_runner.go:130] > [crio.metrics]
	I1205 06:40:03.553634  480112 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 06:40:03.553637  480112 command_runner.go:130] > # enable_metrics = false
	I1205 06:40:03.553641  480112 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 06:40:03.553646  480112 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 06:40:03.553655  480112 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 06:40:03.553661  480112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 06:40:03.553670  480112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 06:40:03.553675  480112 command_runner.go:130] > # metrics_collectors = [
	I1205 06:40:03.553679  480112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 06:40:03.553683  480112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 06:40:03.553687  480112 command_runner.go:130] > # 	"containers_oom_total",
	I1205 06:40:03.553691  480112 command_runner.go:130] > # 	"processes_defunct",
	I1205 06:40:03.553695  480112 command_runner.go:130] > # 	"operations_total",
	I1205 06:40:03.553699  480112 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 06:40:03.553703  480112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 06:40:03.553707  480112 command_runner.go:130] > # 	"operations_errors_total",
	I1205 06:40:03.553711  480112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 06:40:03.553715  480112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 06:40:03.553719  480112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 06:40:03.553723  480112 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 06:40:03.553727  480112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 06:40:03.553731  480112 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 06:40:03.553736  480112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 06:40:03.553740  480112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 06:40:03.553744  480112 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1205 06:40:03.553747  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553753  480112 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1205 06:40:03.553758  480112 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1205 06:40:03.553763  480112 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 06:40:03.553767  480112 command_runner.go:130] > # metrics_port = 9090
	I1205 06:40:03.553772  480112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 06:40:03.553775  480112 command_runner.go:130] > # metrics_socket = ""
	I1205 06:40:03.553780  480112 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 06:40:03.553786  480112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 06:40:03.553792  480112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 06:40:03.553798  480112 command_runner.go:130] > # certificate on any modification event.
	I1205 06:40:03.553802  480112 command_runner.go:130] > # metrics_cert = ""
	I1205 06:40:03.553807  480112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 06:40:03.553812  480112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 06:40:03.553822  480112 command_runner.go:130] > # metrics_key = ""
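To actually enable the metrics endpoint described above, a drop-in plus a quick scrape would look roughly like the following (drop-in path is hypothetical; port matches the default shown):

    # /etc/crio/crio.conf.d/20-metrics.conf
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090

then, on the node:

    curl -s http://127.0.0.1:9090/metrics | grep crio_operations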
	I1205 06:40:03.553828  480112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 06:40:03.553831  480112 command_runner.go:130] > [crio.tracing]
	I1205 06:40:03.553836  480112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 06:40:03.553841  480112 command_runner.go:130] > # enable_tracing = false
	I1205 06:40:03.553846  480112 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 06:40:03.553850  480112 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1205 06:40:03.553857  480112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 06:40:03.553861  480112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
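A sketch of turning the tracing block on against a local OTLP collector, sampling every span (values are assumptions, not from this run):

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000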
	I1205 06:40:03.553865  480112 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 06:40:03.553868  480112 command_runner.go:130] > [crio.nri]
	I1205 06:40:03.553872  480112 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 06:40:03.553876  480112 command_runner.go:130] > # enable_nri = true
	I1205 06:40:03.553880  480112 command_runner.go:130] > # NRI socket to listen on.
	I1205 06:40:03.553884  480112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 06:40:03.553888  480112 command_runner.go:130] > # NRI plugin directory to use.
	I1205 06:40:03.553893  480112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 06:40:03.553898  480112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 06:40:03.553902  480112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 06:40:03.553908  480112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 06:40:03.553979  480112 command_runner.go:130] > # nri_disable_connections = false
	I1205 06:40:03.553985  480112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 06:40:03.553990  480112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 06:40:03.553995  480112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 06:40:03.554000  480112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 06:40:03.554004  480112 command_runner.go:130] > # NRI default validator configuration.
	I1205 06:40:03.554011  480112 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1205 06:40:03.554017  480112 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1205 06:40:03.554021  480112 command_runner.go:130] > # can be restricted/rejected:
	I1205 06:40:03.554025  480112 command_runner.go:130] > # - OCI hook injection
	I1205 06:40:03.554030  480112 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1205 06:40:03.554035  480112 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1205 06:40:03.554039  480112 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1205 06:40:03.554047  480112 command_runner.go:130] > # - adjustment of linux namespaces
	I1205 06:40:03.554054  480112 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1205 06:40:03.554060  480112 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1205 06:40:03.554066  480112 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1205 06:40:03.554070  480112 command_runner.go:130] > #
	I1205 06:40:03.554075  480112 command_runner.go:130] > # [crio.nri.default_validator]
	I1205 06:40:03.554079  480112 command_runner.go:130] > # nri_enable_default_validator = false
	I1205 06:40:03.554084  480112 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1205 06:40:03.554090  480112 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1205 06:40:03.554095  480112 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1205 06:40:03.554101  480112 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1205 06:40:03.554106  480112 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1205 06:40:03.554110  480112 command_runner.go:130] > # nri_validator_required_plugins = [
	I1205 06:40:03.554113  480112 command_runner.go:130] > # ]
	I1205 06:40:03.554118  480112 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
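Putting the validator keys above together, a sketch that rejects OCI hook injection and namespace adjustments while requiring one plugin (the plugin name is hypothetical):

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    nri_validator_reject_namespace_adjustment = true
    nri_validator_required_plugins = [
        "my-policy-plugin",   # hypothetical plugin name
    ]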
	I1205 06:40:03.554124  480112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 06:40:03.554127  480112 command_runner.go:130] > [crio.stats]
	I1205 06:40:03.554133  480112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 06:40:03.554138  480112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 06:40:03.554142  480112 command_runner.go:130] > # stats_collection_period = 0
	I1205 06:40:03.554148  480112 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1205 06:40:03.554154  480112 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1205 06:40:03.554158  480112 command_runner.go:130] > # collection_period = 0
	I1205 06:40:03.556162  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527241832Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1205 06:40:03.556207  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527278608Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1205 06:40:03.556230  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527308122Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1205 06:40:03.556255  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.52733264Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1205 06:40:03.556280  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527409367Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.556295  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527814951Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1205 06:40:03.556306  480112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 06:40:03.556383  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:40:03.556397  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:40:03.556420  480112 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:40:03.556447  480112 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:40:03.556582  480112 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
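Recent kubeadm releases can sanity-check a rendered config like the one above before it is used; assuming the file lands at the path scp'd below, the check would be:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new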
	
	I1205 06:40:03.556659  480112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:40:03.563611  480112 command_runner.go:130] > kubeadm
	I1205 06:40:03.563630  480112 command_runner.go:130] > kubectl
	I1205 06:40:03.563636  480112 command_runner.go:130] > kubelet
	I1205 06:40:03.564590  480112 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:40:03.564681  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:40:03.572146  480112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:40:03.584914  480112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:40:03.598402  480112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 06:40:03.610806  480112 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:40:03.614247  480112 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:40:03.614336  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.749526  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:04.526831  480112 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:40:04.526920  480112 certs.go:195] generating shared ca certs ...
	I1205 06:40:04.526970  480112 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:04.527146  480112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:40:04.527262  480112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:40:04.527298  480112 certs.go:257] generating profile certs ...
	I1205 06:40:04.527454  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:40:04.527572  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:40:04.527654  480112 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:40:04.527683  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:40:04.527717  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:40:04.527750  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:40:04.527779  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:40:04.527812  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:40:04.527845  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:40:04.527901  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:40:04.527942  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:40:04.528018  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:40:04.528084  480112 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:40:04.528110  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:40:04.528175  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:40:04.528223  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:40:04.528266  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:40:04.528351  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:04.528416  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.528448  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem -> /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.528484  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.529122  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:40:04.549434  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:40:04.568942  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:40:04.588032  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:40:04.616779  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:40:04.636137  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:40:04.655504  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:40:04.673755  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:40:04.692822  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:40:04.711199  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:40:04.730794  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:40:04.748559  480112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:40:04.762229  480112 ssh_runner.go:195] Run: openssl version
	I1205 06:40:04.768327  480112 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:40:04.768697  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.776287  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:40:04.784133  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788189  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788221  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788277  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.829541  480112 command_runner.go:130] > b5213941
	I1205 06:40:04.829985  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:40:04.837884  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.845797  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:40:04.853974  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.857841  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858230  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858295  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.900152  480112 command_runner.go:130] > 51391683
	I1205 06:40:04.900696  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:40:04.908660  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.916381  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:40:04.924345  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928449  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928489  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928538  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.969475  480112 command_runner.go:130] > 3ec20f2e
	I1205 06:40:04.969979  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
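The hash-and-symlink sequence above is the standard OpenSSL CA directory layout: the certificate's subject hash names a <hash>.0 symlink in /etc/ssl/certs so verification can locate it. The same operation by hand:

    cert=/usr/share/ca-certificates/4441472.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. 3ec20f2e
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"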
	I1205 06:40:04.977627  480112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981676  480112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981703  480112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:40:04.981710  480112 command_runner.go:130] > Device: 259,1	Inode: 1046940     Links: 1
	I1205 06:40:04.981717  480112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:04.981724  480112 command_runner.go:130] > Access: 2025-12-05 06:35:56.052204819 +0000
	I1205 06:40:04.981729  480112 command_runner.go:130] > Modify: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981735  480112 command_runner.go:130] > Change: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981741  480112 command_runner.go:130] >  Birth: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981812  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:40:05.025511  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.026281  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:40:05.067472  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.067923  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:40:05.109199  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.110439  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:40:05.151291  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.151789  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:40:05.192630  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.193112  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:40:05.234917  480112 command_runner.go:130] > Certificate will not expire
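Each of the checks above relies on openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours). Standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"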
	I1205 06:40:05.235493  480112 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:40:05.235576  480112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:40:05.235658  480112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:40:05.274773  480112 cri.go:89] found id: ""
	I1205 06:40:05.274854  480112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:40:05.284543  480112 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:40:05.284569  480112 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:40:05.284576  480112 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:40:05.284587  480112 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:40:05.284593  480112 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:40:05.284641  480112 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:40:05.293745  480112 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:40:05.294169  480112 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.294277  480112 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-441321/kubeconfig needs updating (will repair): [kubeconfig missing "functional-787602" cluster setting kubeconfig missing "functional-787602" context setting]
	I1205 06:40:05.294658  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.295082  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.295239  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.295723  480112 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:40:05.295760  480112 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:40:05.295766  480112 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:40:05.295771  480112 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:40:05.295779  480112 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:40:05.296148  480112 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:40:05.296228  480112 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:40:05.305058  480112 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:40:05.305103  480112 kubeadm.go:602] duration metric: took 20.504477ms to restartPrimaryControlPlane
	I1205 06:40:05.305113  480112 kubeadm.go:403] duration metric: took 69.632192ms to StartCluster
	I1205 06:40:05.305127  480112 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305185  480112 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.305773  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305969  480112 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:40:05.306285  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:05.306340  480112 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:40:05.306433  480112 addons.go:70] Setting storage-provisioner=true in profile "functional-787602"
	I1205 06:40:05.306448  480112 addons.go:239] Setting addon storage-provisioner=true in "functional-787602"
	I1205 06:40:05.306452  480112 addons.go:70] Setting default-storageclass=true in profile "functional-787602"
	I1205 06:40:05.306473  480112 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-787602"
	I1205 06:40:05.306480  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.306771  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.306997  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.310651  480112 out.go:179] * Verifying Kubernetes components...
	I1205 06:40:05.313979  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:05.339795  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.340007  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.340282  480112 addons.go:239] Setting addon default-storageclass=true in "functional-787602"
	I1205 06:40:05.340312  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.340728  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.361959  480112 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:40:05.364893  480112 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.364921  480112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:40:05.364987  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.384451  480112 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:05.384479  480112 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:40:05.384563  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.411372  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.432092  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.510112  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:05.550609  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.557147  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.275527  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275618  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275677  480112 retry.go:31] will retry after 247.926554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275753  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275786  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275814  480112 retry.go:31] will retry after 139.276641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275869  480112 node_ready.go:35] waiting up to 6m0s for node "functional-787602" to be "Ready" ...
	I1205 06:40:06.275986  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.276069  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.276382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
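The readiness poll above is a plain GET on the node object; once the apiserver is reachable again, the same request can be reproduced by hand:

    kubectl get --raw /api/v1/nodes/functional-787602
    # or just the Ready condition:
    kubectl get node functional-787602 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'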
	I1205 06:40:06.415646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.474935  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.474981  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.475001  480112 retry.go:31] will retry after 366.421161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.524197  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.584795  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.584843  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.584873  480112 retry.go:31] will retry after 312.76439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
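The apply failures above are retried with short, jittered, roughly doubling delays until the apiserver accepts connections. A shell sketch of the same retry-with-backoff pattern (delays illustrative):

    delay=0.25
    for attempt in 1 2 3 4 5; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
      delay=$(echo "$delay * 2" | bc)   # grow the wait between attempts
    done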
	I1205 06:40:06.776120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.776655  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.841962  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.898526  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.904086  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.904127  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.904149  480112 retry.go:31] will retry after 740.273906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.959857  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.963461  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.963497  480112 retry.go:31] will retry after 759.965783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.276975  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.277072  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.277469  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.645230  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:07.705790  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.705833  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.705854  480112 retry.go:31] will retry after 642.466008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.724048  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:07.776045  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.776157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.776481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.791584  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.795338  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.795382  480112 retry.go:31] will retry after 614.279076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:08.276605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
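
	The node_ready.go warnings show what the ~500ms poll above is after: the node object for functional-787602, whose Ready condition cannot be read while the apiserver port refuses connections. An illustrative client-go version of that check (the kubeconfig path is copied from the log; the nodeReady helper name is made up for this sketch):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	// While the apiserver is down, the Get itself fails with the
	// "connect: connection refused" error seen in the warnings above.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(nodeReady(cs, "functional-787602"))
	}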
	I1205 06:40:08.348828  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:08.405271  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.408500  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.408576  480112 retry.go:31] will retry after 1.343995427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.410740  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:08.473489  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.473541  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.473564  480112 retry.go:31] will retry after 1.078913702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.777094  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.777222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.777651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.276356  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.276453  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.276780  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.553646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:09.614016  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.614089  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.614116  480112 retry.go:31] will retry after 2.379780781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.753405  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:09.777031  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.777132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.777482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.813171  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.813239  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.813272  480112 retry.go:31] will retry after 1.978465808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:10.276816  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.277257  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:10.277348  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:10.776020  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.776102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.776363  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.276081  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.276155  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.791876  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:11.850961  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:11.851011  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.851047  480112 retry.go:31] will retry after 1.715194365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.994161  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:12.058032  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:12.058079  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.058098  480112 retry.go:31] will retry after 2.989540966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.276377  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.276451  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.276701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:12.776111  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:12.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:13.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.276532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:13.567026  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:13.620219  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:13.623514  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.623554  480112 retry.go:31] will retry after 5.458226005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.776806  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.776876  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.777207  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.277043  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.277126  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.277411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:14.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:15.048089  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:15.111053  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:15.111091  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.111112  480112 retry.go:31] will retry after 5.631155228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.276375  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.276443  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:15.776648  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.777039  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.276857  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.276930  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.776968  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.777300  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:16.777347  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:17.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.276495  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:17.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.276064  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.276137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.276439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:19.082075  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:19.143244  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:19.143293  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.143314  480112 retry.go:31] will retry after 4.646546475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.276638  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.276712  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.277087  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:19.277141  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:19.776926  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.777007  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.777341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.276113  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.743196  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:20.776726  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.776805  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.777070  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.801108  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:20.801144  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:20.801162  480112 retry.go:31] will retry after 9.136671028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:21.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.276973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.277268  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:21.277311  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:21.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.276249  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.776313  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.776619  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.276136  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.776172  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.776265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:23.776664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:23.790980  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:23.852305  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:23.852351  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:23.852373  480112 retry.go:31] will retry after 4.852638111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:24.276878  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.276951  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.277225  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:24.776145  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.276631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.776628  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.776885  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:25.776924  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:26.276685  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.276766  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.277101  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:26.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.777045  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.777350  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.277008  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.277082  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.277349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.776144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:28.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.276512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:28.276571  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:28.705256  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:28.766465  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:28.766519  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.766541  480112 retry.go:31] will retry after 15.718503653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
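
	Each ssh_runner/command_runner exchange in this log is a remote command whose non-zero exit becomes the "Process exited with status 1" error above; kubectl suggests --validate=false because even client-side validation has to fetch the apiserver's /openapi/v2 document, which is unreachable here. A rough local sketch of running the same command and surfacing its exit status (paths copied verbatim from the log; sudo treats the leading VAR=value argument as an environment assignment, and outside the minikube node this command would fail for different reasons):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// This is the shape the addons applier logs above: exit status
			// plus the captured stdout/stderr of the failed kubectl apply.
			fmt.Printf("Process exited with status %d\n%s", exitErr.ExitCode(), out)
		} else if err != nil {
			fmt.Println("could not run command:", err)
		}
	}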
	I1205 06:40:28.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.777014  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.276890  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.276967  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.277333  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.776501  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.776578  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.776920  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.938493  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:30.002212  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:30.002257  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.002277  480112 retry.go:31] will retry after 5.082732051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.276542  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.276613  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.276880  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:30.276935  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:30.776666  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.776745  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.777100  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.276768  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.276846  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.277194  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.776934  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.777009  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.777271  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.276395  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.276491  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.276813  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.776164  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.776245  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:32.776649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.276140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:33.776151  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.276378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.776431  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.776497  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.776750  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:34.776788  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:35.085301  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:35.148531  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:35.152882  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.152918  480112 retry.go:31] will retry after 11.086200752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.276603  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:40:37.276, each returning an empty connection-refused response ...]
	W1205 06:40:37.276633  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
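
The node_ready.go warnings record a plain readiness poll: fetch the node object, inspect its Ready condition, and treat any transport error (like the connection-refused responses here) as "not ready yet, retry". A sketch of that check assuming client-go; minikube's own helper in node_ready.go may differ in detail:

    package nodeready

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // Ready reports whether the named node has condition Ready=True.
    // Transport errors are returned to the caller, which keeps polling.
    func Ready(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
    	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
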
	I1205 06:40:37.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.776452  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:40:39.776, each returning an empty connection-refused response ...]
	W1205 06:40:39.776575  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:40.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:40:42.276, each returning an empty connection-refused response ...]
	W1205 06:40:42.276631  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:42.776277  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.776365  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.776691  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:40:44.276, each returning an empty connection-refused response ...]
	I1205 06:40:44.485984  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:44.554072  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:44.557893  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.557927  480112 retry.go:31] will retry after 22.628614414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.776735  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:44.776781  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:45.276131  480112 type.go:168] "Request Body" body=""
	I1205 06:40:45.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:45.276570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:45.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:40:45.776253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:45.776599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.239320  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:46.276723  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.277080  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:46.296820  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:46.296888  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.296909  480112 retry.go:31] will retry after 16.475007469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.776108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:46.776261  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:46.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:47.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:40:47.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:47.276550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:47.276621  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:47.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:40:47.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:47.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:40:49.276, each returning an empty connection-refused response ...]
	W1205 06:40:49.276666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:49.776457  480112 type.go:168] "Request Body" body=""
	I1205 06:40:49.776539  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:49.776814  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:40:51.776, each returning an empty connection-refused response ...]
	W1205 06:40:51.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:52.276278  480112 type.go:168] "Request Body" body=""
	I1205 06:40:52.276356  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:52.276689  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:40:54.276, each returning an empty connection-refused response ...]
	W1205 06:40:54.276568  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:54.776448  480112 type.go:168] "Request Body" body=""
	I1205 06:40:54.776530  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:54.776853  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:40:56.776, each returning an empty connection-refused response ...]
	W1205 06:40:56.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:57.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:40:57.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:57.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:40:58.776, each returning an empty connection-refused response ...]
	W1205 06:40:58.776608  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:59.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:40:59.276178  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:59.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:41:00.776, each returning an empty connection-refused response ...]
	W1205 06:41:00.776689  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:01.276381  480112 type.go:168] "Request Body" body=""
	I1205 06:41:01.276456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:01.276781  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 2 further identical GET polls at ~500ms intervals through 06:41:02.276, each returning an empty connection-refused response ...]
	I1205 06:41:02.772181  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:02.776748  480112 type.go:168] "Request Body" body=""
	I1205 06:41:02.776818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:02.777092  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:02.777132  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:02.828748  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:02.831873  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:02.831907  480112 retry.go:31] will retry after 23.767145255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:03.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:41:03.276184  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:03.276443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:41:05.276, each returning an empty connection-refused response ...]
	W1205 06:41:05.277261  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:05.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:41:05.776214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:05.776532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 2 further identical GET polls at ~500ms intervals through 06:41:06.776, each returning an empty connection-refused response ...]
	I1205 06:41:07.187370  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:07.246801  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:07.246844  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:07.246863  480112 retry.go:31] will retry after 35.018877023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:07.277002  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.277431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:07.277488  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:07.777040  480112 type.go:168] "Request Body" body=""
	I1205 06:41:07.777122  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:07.777377  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:41:09.776, each returning an empty connection-refused response ...]
	W1205 06:41:09.776619  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:10.276305  480112 type.go:168] "Request Body" body=""
	I1205 06:41:10.276400  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:10.276764  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:41:12.276, each returning an empty connection-refused response ...]
	W1205 06:41:12.276527  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:12.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:41:12.776235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:12.776538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:41:14.276, each returning an empty connection-refused response ...]
	W1205 06:41:14.276544  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:14.776236  480112 type.go:168] "Request Body" body=""
	I1205 06:41:14.776308  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:14.776593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:41:16.776, each returning an empty connection-refused response ...]
	W1205 06:41:16.776478  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:17.276217  480112 type.go:168] "Request Body" body=""
	I1205 06:41:17.276289  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:17.276578  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:41:18.776, each returning an empty connection-refused response ...]
	W1205 06:41:18.776691  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:19.276385  480112 type.go:168] "Request Body" body=""
	I1205 06:41:19.276469  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:19.276800  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:41:20.777, each returning an empty connection-refused response ...]
	W1205 06:41:20.777604  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:21.276075  480112 type.go:168] "Request Body" body=""
	I1205 06:41:21.276146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:21.276451  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 further identical GET polls at ~500ms intervals through 06:41:23.276, each returning an empty connection-refused response ...]
	W1205 06:41:23.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:23.776148  480112 type.go:168] "Request Body" body=""
	I1205 06:41:23.776224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:23.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 further identical GET polls at ~500ms intervals through 06:41:25.276, each returning an empty connection-refused response ...]
	W1205 06:41:25.276662  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:25.776023  480112 type.go:168] "Request Body" body=""
	I1205 06:41:25.776090  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:25.776414  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:26.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:41:26.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:26.276503  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
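Every poll above ends with "connect: connection refused", i.e. nothing is accepting connections on 192.168.49.2:8441 at this point in the run. As a manual cross-check (a diagnostic sketch only, not something this test executed), the apiserver could be probed directly from the host:

    # -k because the test cluster serves a self-signed certificate
    curl -k https://192.168.49.2:8441/healthz
    # or ask minikube to summarize the control plane for this profile
    minikube status -p functional-787602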
	I1205 06:41:26.599995  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:26.657664  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660860  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660976  480112 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
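The "failed to download openapi" error is kubectl's client-side validation trying to fetch the OpenAPI schema from the unreachable apiserver; the --validate=false the message suggests only skips that schema download, so the apply itself would still fail until something listens on localhost:8441. For reference, the proposed workaround would look like this (same paths as in the log above):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml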
	[ ... the GET /api/v1/nodes/functional-787602 poll continues every ~500ms from 06:41:26.776 through 06:41:41.776, every response status="" in 0ms; node_ready.go:55 logged the identical "connect: connection refused" retry warning at 06:41:27.776, 06:41:29.777, 06:41:31.777, 06:41:34.276, 06:41:36.277, 06:41:38.776 and 06:41:40.776 ... ]
	I1205 06:41:42.266330  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:42.277622  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.277694  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.277960  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.360709  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361696  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361795  480112 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:41:42.365007  480112 out.go:179] * Enabled addons: 
	I1205 06:41:42.368666  480112 addons.go:530] duration metric: took 1m37.062317768s for enable addons: enabled=[]
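"enabled=[]" means no addon callback succeeded during the entire 1m37s retry window. Once the apiserver is reachable again, the two failed addons could be re-applied per profile with the standard commands (a sketch, not part of this run):

    minikube -p functional-787602 addons enable storage-provisioner
    minikube -p functional-787602 addons enable default-storageclass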
	[ ... the node-Ready poll keeps running every ~500ms from 06:41:42.776 through 06:42:20.776 with unchanged request/response lines (one response, at 06:42:04.782, took 5ms; all others 0ms); the "connect: connection refused" retry warning from node_ready.go:55 recurred at 06:41:43, 06:41:45, 06:41:47, 06:41:50, 06:41:52, 06:41:54, 06:41:57, 06:41:59, 06:42:01, 06:42:03, 06:42:05, 06:42:08, 06:42:10, 06:42:12, 06:42:14, 06:42:17 and 06:42:19 ... ]
	I1205 06:42:21.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:42:21.276156  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:21.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:21.776197  480112 type.go:168] "Request Body" body=""
	I1205 06:42:21.776270  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:21.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:21.776630  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:22.276147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:22.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:22.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:22.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:42:22.776267  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:22.776568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:23.276270  480112 type.go:168] "Request Body" body=""
	I1205 06:42:23.276346  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:23.276675  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:23.776093  480112 type.go:168] "Request Body" body=""
	I1205 06:42:23.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:23.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:24.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:42:24.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:24.276435  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:24.276482  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:24.776113  480112 type.go:168] "Request Body" body=""
	I1205 06:42:24.776187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:24.776508  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:25.276141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:25.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:25.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:25.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:25.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:25.776592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:26.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:26.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:26.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:26.276597  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:26.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:42:26.776223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:26.776559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:27.276255  480112 type.go:168] "Request Body" body=""
	I1205 06:42:27.276327  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:27.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:27.776272  480112 type.go:168] "Request Body" body=""
	I1205 06:42:27.776352  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:27.776694  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:28.276141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:28.276215  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:28.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:28.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:28.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:28.776441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:28.776496  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:29.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:29.276214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:29.276536  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:29.776193  480112 type.go:168] "Request Body" body=""
	I1205 06:42:29.776294  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:29.776633  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.276354  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.276958  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.776754  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.776886  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.777216  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:30.777271  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:31.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.276997  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.277353  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:31.776905  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.776973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.777239  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.276031  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.276129  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.776566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:33.276005  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.276073  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.276326  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:33.276364  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:33.776056  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.776130  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.776489  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.276252  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.276601  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.776105  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.776170  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.776439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:35.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.276502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:35.276548  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:35.776412  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.776485  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.776805  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.276468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:37.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:37.276578  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:37.776073  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.776140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.776354  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.276191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.276447  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.776154  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.776231  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.776555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:39.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:40.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.776168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.776483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.276160  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.776311  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.776412  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:41.776800  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:42.276460  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.276533  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.276835  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:42.776147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.276274  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.776371  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:44.276399  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.276475  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.276774  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:44.276818  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:44.776823  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.776896  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.777260  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.277015  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.277165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.277467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.276290  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.276372  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.276755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.776351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.776696  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:46.776865  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:47.276162  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.276562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:47.776400  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.776503  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.777026  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.276644  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.276723  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.276978  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.776820  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.776899  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.777234  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:48.777287  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:49.277045  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.277135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.277475  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:49.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.776484  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.276116  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.276446  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.776127  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.776478  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:51.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:51.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.776200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.276504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.776090  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.776470  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.276544  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:53.776655  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:54.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:54.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.776393  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.776463  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.776718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:55.776760  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:56.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:56.776279  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.776355  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.276419  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.776121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:58.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:58.276716  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:58.776027  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.776099  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.776349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.276501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.776817  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.776902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.777233  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:00.277352  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.277456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.277768  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:00.277814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:00.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.776654  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.776901  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.776971  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.777244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.277063  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.277162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.277496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:02.776546  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:03.276099  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.276179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:03.776134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.276221  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.276644  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.776637  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.776900  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:04.776951  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:05.276709  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:05.776977  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.777064  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.777431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.276481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.776100  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.776494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:07.276116  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:07.276607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:07.776246  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.776316  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.776269  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.276194  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.276266  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.276528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.776304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.776699  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:09.776757  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:10.276472  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.276560  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.276905  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:10.776647  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.776717  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.776986  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.276873  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.277209  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.777023  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.777098  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.777457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:11.777510  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:12.276089  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.276172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:12.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.276276  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.276351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.276678  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.776139  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:14.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.276507  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:14.276551  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:14.776223  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.776298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.776621  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.276090  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.276486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:16.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.276607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:16.276663  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:16.776324  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.776396  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.776758  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.276146  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.776230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.776575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.276431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:18.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:19.276304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.276385  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.276748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:19.776687  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.776760  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.777008  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:20.276846  480112 type.go:168] "Request Body" body=""
	I1205 06:43:20.276923  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:20.277244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:20.777028  480112 type.go:168] "Request Body" body=""
	I1205 06:43:20.777103  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:20.777448  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:20.777499  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:21.276166  480112 type.go:168] "Request Body" body=""
	I1205 06:43:21.276240  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:21.276519  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:21.776177  480112 type.go:168] "Request Body" body=""
	I1205 06:43:21.776259  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:21.776596  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:22.276311  480112 type.go:168] "Request Body" body=""
	I1205 06:43:22.276394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:22.276742  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:22.776321  480112 type.go:168] "Request Body" body=""
	I1205 06:43:22.776394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:22.776716  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:23.276454  480112 type.go:168] "Request Body" body=""
	I1205 06:43:23.276541  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:23.276962  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:23.277021  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:23.776836  480112 type.go:168] "Request Body" body=""
	I1205 06:43:23.776923  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:23.777277  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:24.277022  480112 type.go:168] "Request Body" body=""
	I1205 06:43:24.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:24.277402  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:24.776239  480112 type.go:168] "Request Body" body=""
	I1205 06:43:24.776322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:24.776645  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:25.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:43:25.276424  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:25.276715  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:25.776566  480112 type.go:168] "Request Body" body=""
	I1205 06:43:25.776639  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:25.776913  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:25.776962  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:26.276795  480112 type.go:168] "Request Body" body=""
	I1205 06:43:26.276868  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:26.277314  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:26.776043  480112 type.go:168] "Request Body" body=""
	I1205 06:43:26.776120  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:26.776468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:27.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:27.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:27.276458  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:27.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:43:27.776174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:27.776490  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:28.276190  480112 type.go:168] "Request Body" body=""
	I1205 06:43:28.276267  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:28.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:28.276648  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:28.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:43:28.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:28.776457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:29.276201  480112 type.go:168] "Request Body" body=""
	I1205 06:43:29.276276  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:29.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:29.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:29.776229  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:29.776584  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:30.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:43:30.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:30.276477  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:30.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:43:30.776218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:30.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:30.776585  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:31.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:43:31.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:31.276684  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:31.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:43:31.776149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:31.776434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:32.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:43:32.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:32.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:32.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:43:32.776375  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:32.776708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:32.776765  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:43:33.276143  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:33.276404  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:33.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:43:33.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:33.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:34.276299  480112 type.go:168] "Request Body" body=""
	I1205 06:43:34.276386  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:34.276745  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:34.776318  480112 type.go:168] "Request Body" body=""
	I1205 06:43:34.776389  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:34.776645  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:35.276158  480112 type.go:168] "Request Body" body=""
	I1205 06:43:35.276233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:35.276569  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:35.276620  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:35.776302  480112 type.go:168] "Request Body" body=""
	I1205 06:43:35.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:35.776730  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:36.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:43:36.276228  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:36.276513  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:36.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:43:36.776244  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:36.776582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:37.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:43:37.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:37.276568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:37.776217  480112 type.go:168] "Request Body" body=""
	I1205 06:43:37.776283  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:37.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:37.776588  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:38.276170  480112 type.go:168] "Request Body" body=""
	I1205 06:43:38.276253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:38.276591  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:38.776284  480112 type.go:168] "Request Body" body=""
	I1205 06:43:38.776366  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:38.776702  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:39.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:39.276158  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:39.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:39.776295  480112 type.go:168] "Request Body" body=""
	I1205 06:43:39.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:39.776693  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:39.776750  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:40.276217  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:40.276537  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:40.776079  480112 type.go:168] "Request Body" body=""
	I1205 06:43:40.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:40.776460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:41.276147  480112 type.go:168] "Request Body" body=""
	I1205 06:43:41.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:41.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:41.776268  480112 type.go:168] "Request Body" body=""
	I1205 06:43:41.776350  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:41.776641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:42.276112  480112 type.go:168] "Request Body" body=""
	I1205 06:43:42.276194  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:42.276467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:42.276522  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:42.776162  480112 type.go:168] "Request Body" body=""
	I1205 06:43:42.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:42.776576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:43.276319  480112 type.go:168] "Request Body" body=""
	I1205 06:43:43.276422  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:43.276770  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:43.776459  480112 type.go:168] "Request Body" body=""
	I1205 06:43:43.776529  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:43.776862  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:44.276624  480112 type.go:168] "Request Body" body=""
	I1205 06:43:44.276703  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:44.277019  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:44.277073  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:44.776885  480112 type.go:168] "Request Body" body=""
	I1205 06:43:44.776964  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:44.777314  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:45.276046  480112 type.go:168] "Request Body" body=""
	I1205 06:43:45.276131  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:45.276394  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:45.776391  480112 type.go:168] "Request Body" body=""
	I1205 06:43:45.776465  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:45.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:46.276420  480112 type.go:168] "Request Body" body=""
	I1205 06:43:46.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:46.276883  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:46.776662  480112 type.go:168] "Request Body" body=""
	I1205 06:43:46.776730  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:46.776998  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:46.777043  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:47.276766  480112 type.go:168] "Request Body" body=""
	I1205 06:43:47.276837  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:47.277173  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:47.776961  480112 type.go:168] "Request Body" body=""
	I1205 06:43:47.777038  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:47.777378  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:48.277033  480112 type.go:168] "Request Body" body=""
	I1205 06:43:48.277102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:48.277382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:48.776065  480112 type.go:168] "Request Body" body=""
	I1205 06:43:48.776137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:48.776471  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:49.276093  480112 type.go:168] "Request Body" body=""
	I1205 06:43:49.276177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:49.276505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:49.276562  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:49.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:43:49.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:49.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:50.276235  480112 type.go:168] "Request Body" body=""
	I1205 06:43:50.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:50.276637  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:50.776106  480112 type.go:168] "Request Body" body=""
	I1205 06:43:50.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:50.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:51.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:43:51.276152  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:51.276423  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:51.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:51.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:51.776605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:52.276271  480112 type.go:168] "Request Body" body=""
	I1205 06:43:52.276356  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:52.276672  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:52.776325  480112 type.go:168] "Request Body" body=""
	I1205 06:43:52.776416  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:52.776729  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:53.276142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:53.276218  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:53.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:53.776158  480112 type.go:168] "Request Body" body=""
	I1205 06:43:53.776239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:53.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:54.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:43:54.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:54.276616  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:54.276664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:54.776326  480112 type.go:168] "Request Body" body=""
	I1205 06:43:54.776403  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:54.776723  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:55.276353  480112 type.go:168] "Request Body" body=""
	I1205 06:43:55.276436  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:55.276747  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:55.776688  480112 type.go:168] "Request Body" body=""
	I1205 06:43:55.776759  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:55.777015  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:56.276831  480112 type.go:168] "Request Body" body=""
	I1205 06:43:56.276902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:56.277216  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:56.277268  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:56.776881  480112 type.go:168] "Request Body" body=""
	I1205 06:43:56.776955  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:56.777297  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:57.276004  480112 type.go:168] "Request Body" body=""
	I1205 06:43:57.276075  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:57.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:57.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:43:57.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:57.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:58.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:43:58.276309  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:58.276651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:58.776346  480112 type.go:168] "Request Body" body=""
	I1205 06:43:58.776416  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:58.776677  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:58.776715  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:59.276125  480112 type.go:168] "Request Body" body=""
	I1205 06:43:59.276197  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:59.276494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:59.776448  480112 type.go:168] "Request Body" body=""
	I1205 06:43:59.776524  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:59.776869  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:00.276462  480112 type.go:168] "Request Body" body=""
	I1205 06:44:00.276555  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:00.276854  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:00.776547  480112 type.go:168] "Request Body" body=""
	I1205 06:44:00.776618  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:00.776940  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:00.776993  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:01.276491  480112 type.go:168] "Request Body" body=""
	I1205 06:44:01.276575  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:01.276927  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:01.776492  480112 type.go:168] "Request Body" body=""
	I1205 06:44:01.776564  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:01.776833  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:02.276163  480112 type.go:168] "Request Body" body=""
	I1205 06:44:02.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:02.276584  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:02.776165  480112 type.go:168] "Request Body" body=""
	I1205 06:44:02.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:02.776570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:03.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:44:03.276153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:03.276417  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:03.276467  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:03.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:44:03.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:03.776577  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:04.276109  480112 type.go:168] "Request Body" body=""
	I1205 06:44:04.276190  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:04.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:04.776096  480112 type.go:168] "Request Body" body=""
	I1205 06:44:04.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:04.776506  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:05.276414  480112 type.go:168] "Request Body" body=""
	I1205 06:44:05.276498  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:05.276881  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:05.276923  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:05.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:44:05.776194  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:05.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	... [repetitive polling elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-787602 request/response pair repeated every 500 ms from 06:44:06.276 through 06:45:06.276, every attempt logging status="" headers="" milliseconds=0; the node_ready.go:55 "connection refused" warning recurred 27 times, roughly every 2 s, from 06:44:07.776509 through 06:45:06.276549] ...
	I1205 06:45:06.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:45:06.776318  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:06.776607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:07.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:45:07.276333  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:07.276647  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:07.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:45:07.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:07.776505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:08.276124  480112 type.go:168] "Request Body" body=""
	I1205 06:45:08.276205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:08.276525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:08.276582  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:08.776068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:08.776135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:08.776427  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:09.276142  480112 type.go:168] "Request Body" body=""
	I1205 06:45:09.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:09.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:09.776450  480112 type.go:168] "Request Body" body=""
	I1205 06:45:09.776528  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:09.776851  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:10.276606  480112 type.go:168] "Request Body" body=""
	I1205 06:45:10.276677  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:10.277000  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:10.277057  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:10.776657  480112 type.go:168] "Request Body" body=""
	I1205 06:45:10.776732  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:10.777046  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:11.276811  480112 type.go:168] "Request Body" body=""
	I1205 06:45:11.276882  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:11.277223  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:11.776851  480112 type.go:168] "Request Body" body=""
	I1205 06:45:11.776931  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:11.777196  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:12.276964  480112 type.go:168] "Request Body" body=""
	I1205 06:45:12.277038  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:12.277388  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:12.277445  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:12.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:12.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:12.776553  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:13.276228  480112 type.go:168] "Request Body" body=""
	I1205 06:45:13.276298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:13.276604  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:13.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:13.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:13.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:14.276149  480112 type.go:168] "Request Body" body=""
	I1205 06:45:14.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:14.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:14.776072  480112 type.go:168] "Request Body" body=""
	I1205 06:45:14.776145  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:14.776458  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:14.776508  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:15.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:15.276200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:15.276794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:15.776653  480112 type.go:168] "Request Body" body=""
	I1205 06:45:15.776744  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:15.777091  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:16.276715  480112 type.go:168] "Request Body" body=""
	I1205 06:45:16.276782  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:16.277064  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:16.776929  480112 type.go:168] "Request Body" body=""
	I1205 06:45:16.777011  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:16.777376  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:16.777433  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:17.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:45:17.276186  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:17.276483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:17.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:17.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:17.776459  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:18.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:18.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:18.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:18.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:45:18.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:18.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:19.276077  480112 type.go:168] "Request Body" body=""
	I1205 06:45:19.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:19.276409  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:19.276457  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:19.776140  480112 type.go:168] "Request Body" body=""
	I1205 06:45:19.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:19.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:20.276265  480112 type.go:168] "Request Body" body=""
	I1205 06:45:20.276339  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:20.276676  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:20.776202  480112 type.go:168] "Request Body" body=""
	I1205 06:45:20.776280  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:20.776606  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:21.276132  480112 type.go:168] "Request Body" body=""
	I1205 06:45:21.276210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:21.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:21.276636  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:21.776310  480112 type.go:168] "Request Body" body=""
	I1205 06:45:21.776390  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:21.776682  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:22.276070  480112 type.go:168] "Request Body" body=""
	I1205 06:45:22.276144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:22.276441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:22.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:22.776202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:22.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:23.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:45:23.276321  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:23.276652  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:23.276714  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:23.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:45:23.776172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:23.776500  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:24.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:45:24.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:24.276572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:24.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:24.776624  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:24.776995  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:25.276720  480112 type.go:168] "Request Body" body=""
	I1205 06:45:25.276795  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:25.277096  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:25.277138  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:25.777018  480112 type.go:168] "Request Body" body=""
	I1205 06:45:25.777094  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:25.777486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:26.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:26.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:26.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:26.776075  480112 type.go:168] "Request Body" body=""
	I1205 06:45:26.776149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:26.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:27.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:27.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:27.276551  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:27.776258  480112 type.go:168] "Request Body" body=""
	I1205 06:45:27.776335  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:27.776680  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:27.776737  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:28.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.276623  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:28.776331  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.776414  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.776707  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.276416  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.276493  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.276818  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.776639  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.776714  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.776980  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:29.777029  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:30.276781  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.276856  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.277201  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:30.776871  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.776952  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.777288  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.277017  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.277360  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.776747  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.776819  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.777132  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:31.777186  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:32.276950  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.277023  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.277345  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:32.776087  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.776177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.776473  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.276576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.776178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.776686  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:34.276385  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.276462  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.276731  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:34.276780  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:34.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.776596  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.776911  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.276784  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.276862  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.277181  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.776969  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.777301  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:36.277066  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.277146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.277501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:36.277569  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:36.776101  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.776185  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.276163  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.776112  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.276202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.276516  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:38.776483  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:39.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.276555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:39.776407  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.776488  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.776826  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.276588  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.276663  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.276937  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.776794  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.776875  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.777212  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:40.777264  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:41.277035  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.277114  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.277443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:41.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.776176  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.276209  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.276666  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.776161  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.776562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:43.276203  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.276275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.276590  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:43.276647  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:43.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.276530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.776148  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:45.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.276708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:45.276763  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:45.776444  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.776519  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.776847  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.276602  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.276676  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.276921  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.776673  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.776790  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.777114  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:47.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:47.277302  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:47.776975  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.777051  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.777338  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.276041  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.276118  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.276410  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.776030  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.776109  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.776395  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.276033  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.276104  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.276393  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:49.776593  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:50.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.276494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:50.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.776209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:52.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.276243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.276510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:52.276549  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:52.776182  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.776256  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.776572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.776498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:54.276193  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.276592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:54.276649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:54.776395  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.776470  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.276132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.276389  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.776213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.776545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:56.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.276656  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:56.276710  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:56.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.776281  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.776602  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.276123  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.276534  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.776290  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.776381  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.776755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.276054  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.276133  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.276434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:58.776554  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:59.276223  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:59.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.276689  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.276784  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.277182  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.776974  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.777053  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.777397  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:00.777455  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:01.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.276181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:01.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.276247  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.276641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:03.276061  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.276138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:03.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:03.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.276265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.776433  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.776505  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.776849  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:05.276666  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.276770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:05.277147  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:05.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:06.276074  480112 node_ready.go:38] duration metric: took 6m0.000169865s for node "functional-787602" to be "Ready" ...
	I1205 06:46:06.279558  480112 out.go:203] 
	W1205 06:46:06.282535  480112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:46:06.282557  480112 out.go:285] * 
	W1205 06:46:06.284719  480112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:46:06.287525  480112 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.581412173Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9aa7b449-1ae2-4384-87e4-65b9a24fbea7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.605978089Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b3194c87-82bc-4598-80c5-431dd94b79dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.606137656Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b3194c87-82bc-4598-80c5-431dd94b79dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.606189743Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b3194c87-82bc-4598-80c5-431dd94b79dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.704559867Z" level=info msg="Checking image status: minikube-local-cache-test:functional-787602" id=22be8394-443f-4475-bd1a-0099e58de926 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.730063797Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-787602" id=5e4e0158-f569-409e-beb1-58fd4e4941c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.730220697Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-787602 not found" id=5e4e0158-f569-409e-beb1-58fd4e4941c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.730273711Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-787602 found" id=5e4e0158-f569-409e-beb1-58fd4e4941c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.756848835Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-787602" id=075858db-0799-409d-b815-fb45ccb8b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.757020021Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-787602 not found" id=075858db-0799-409d-b815-fb45ccb8b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.75707301Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-787602 found" id=075858db-0799-409d-b815-fb45ccb8b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.562814589Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f3477146-fa11-4450-a819-6bd0968712a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.889931896Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=076f8c25-05eb-4184-9765-300a3483fdb8 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.89009445Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=076f8c25-05eb-4184-9765-300a3483fdb8 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.890143222Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=076f8c25-05eb-4184-9765-300a3483fdb8 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.43805252Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=29e1462a-2dc0-4511-b385-36e4e9ef2888 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.43817646Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=29e1462a-2dc0-4511-b385-36e4e9ef2888 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.438211234Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=29e1462a-2dc0-4511-b385-36e4e9ef2888 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.484857632Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1c72a6ec-6385-4908-883a-a6e1d899a39a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.484980916Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1c72a6ec-6385-4908-883a-a6e1d899a39a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.485024699Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1c72a6ec-6385-4908-883a-a6e1d899a39a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.513359165Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7509d3f5-17cd-4bf2-a0d3-20bc09ced34c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.513489103Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7509d3f5-17cd-4bf2-a0d3-20bc09ced34c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.513523286Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7509d3f5-17cd-4bf2-a0d3-20bc09ced34c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:18 functional-787602 crio[6033]: time="2025-12-05T06:46:18.056741075Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=91ed2cc6-448f-4a39-a38a-99accbdaa389 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:19.581524   10038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.582062   10038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.583515   10038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.584594   10038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:19.585272   10038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:46:19 up  3:28,  0 user,  load average: 0.53, 0.27, 0.49
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:46:17 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:17 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1146.
	Dec 05 06:46:17 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:17 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:17 functional-787602 kubelet[9912]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:17 functional-787602 kubelet[9912]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:17 functional-787602 kubelet[9912]: E1205 06:46:17.813132    9912 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:17 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:17 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:18 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1147.
	Dec 05 06:46:18 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:18 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:18 functional-787602 kubelet[9940]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:18 functional-787602 kubelet[9940]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:18 functional-787602 kubelet[9940]: E1205 06:46:18.595118    9940 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:18 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:18 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:19 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1148.
	Dec 05 06:46:19 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:19 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:19 functional-787602 kubelet[9971]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:19 functional-787602 kubelet[9971]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:19 functional-787602 kubelet[9971]: E1205 06:46:19.331729    9971 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:19 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:19 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
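The six-minute run of round_trippers GETs in the log above is minikube polling the node object for its Ready condition roughly every 500ms until the 6m0s deadline, at which point it exits with GUEST_START: context deadline exceeded. Below is a minimal sketch of that style of poll using client-go and apimachinery's wait helpers; it assumes an already-configured *kubernetes.Clientset and is illustrative only, not minikube's actual node_ready.go.

	// nodewait.go - illustrative sketch only, not minikube's node_ready.go.
	// Polls a node's Ready condition every 500ms until a 6m deadline,
	// matching the cadence visible in the log above.
	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// "connection refused" lands here; keep retrying until the deadline.
					return false, nil
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}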
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (356.506218ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.39s)
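The kubelet section of the log is the actual blocker: systemd has restarted kubelet more than 1,100 times (restart counter 1146-1148 in the window captured), and every attempt dies during config validation with "kubelet is configured to not run on a host using cgroup v1". That is consistent with the host here, since Ubuntu 20.04 still boots with cgroup v1 by default. One common way to tell which hierarchy a host mounts is to look for /sys/fs/cgroup/cgroup.controllers, which exists only under the cgroup v2 unified hierarchy; the sketch below is illustrative and is not the kubelet's own validation code.

	// cgroupcheck.go - illustrative; not the kubelet's validation code.
	// /sys/fs/cgroup/cgroup.controllers exists only when the host mounts
	// the cgroup v2 unified hierarchy.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 - the kubelet build above refuses to start here")
		} else {
			fmt.Println("cgroup version unknown:", err)
		}
	}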

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-787602 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-787602 get pods: exit status 1 (105.598824ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-787602 get pods": exit status 1
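The stderr above is a plain TCP refusal on 192.168.49.2:8441, so kubectl itself is behaving correctly; nothing is listening on the apiserver port. An easy way to confirm that independently of kubectl is a raw dial from the host; this snippet is illustrative and not part of the test suite.

	// dialcheck.go - illustrative snippet, not part of the test suite.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("no listener:", err) // reproduces the "connection refused" above
			return
		}
		conn.Close()
		fmt.Println("listener present on 192.168.49.2:8441")
	}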
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
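The inspect output confirms the container is Running with 8441/tcp published to 127.0.0.1:33151, so the port mapping is in place even though nothing inside the container is serving it. When scripting against a profile like this, the published host port can be read back with `docker port`; a small illustrative wrapper (container name taken from this run):

	// hostport.go - illustrative wrapper around `docker port`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "port", "functional-787602", "8441/tcp").Output()
		if err != nil {
			fmt.Println("docker port failed:", err)
			return
		}
		fmt.Printf("%s", out) // expected here: 127.0.0.1:33151
	}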
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (319.114179ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-252233 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ ssh     │ functional-252233 ssh pgrep buildkitd                                                                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image   │ functional-252233 image ls --format yaml --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                            │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format json --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format table --alsologtostderr                                                                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls                                                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete  │ -p functional-252233                                                                                                                              │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start   │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ start   │ -p functional-787602 --alsologtostderr -v=8                                                                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:39 UTC │                     │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:latest                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add minikube-local-cache-test:functional-787602                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache delete minikube-local-cache-test:functional-787602                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl images                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ cache   │ functional-787602 cache reload                                                                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ kubectl │ functional-787602 kubectl -- --context functional-787602 get pods                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:39:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:39:59.523609  480112 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:39:59.523793  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.523816  480112 out.go:374] Setting ErrFile to fd 2...
	I1205 06:39:59.523837  480112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:39:59.524220  480112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:39:59.524681  480112 out.go:368] Setting JSON to false
	I1205 06:39:59.525943  480112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12127,"bootTime":1764904673,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:39:59.526021  480112 start.go:143] virtualization:  
	I1205 06:39:59.529485  480112 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:39:59.533299  480112 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:39:59.533430  480112 notify.go:221] Checking for updates...
	I1205 06:39:59.539032  480112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:39:59.542038  480112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:39:59.544821  480112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:39:59.547558  480112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:39:59.550303  480112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:39:59.553653  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:39:59.553793  480112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:39:59.587101  480112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:39:59.587209  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.647016  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.637315829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.647121  480112 docker.go:319] overlay module found
	I1205 06:39:59.650323  480112 out.go:179] * Using the docker driver based on existing profile
	I1205 06:39:59.653400  480112 start.go:309] selected driver: docker
	I1205 06:39:59.653426  480112 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.653516  480112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:39:59.653622  480112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:39:59.713012  480112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:39:59.702941112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:39:59.713548  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:39:59.713621  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:39:59.713678  480112 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:39:59.716888  480112 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:39:59.719675  480112 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:39:59.722682  480112 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:39:59.725781  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:39:59.725946  480112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:39:59.745247  480112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:39:59.745269  480112 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:39:59.798316  480112 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:40:00.046313  480112 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 06:40:00.046504  480112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:40:00.046814  480112 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:40:00.046857  480112 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.046933  480112 start.go:364] duration metric: took 43.709µs to acquireMachinesLock for "functional-787602"
	I1205 06:40:00.046950  480112 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:40:00.046969  480112 fix.go:54] fixHost starting: 
	I1205 06:40:00.047287  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:00.049366  480112 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049539  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:40:00.049567  480112 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 224.085µs
	I1205 06:40:00.049597  480112 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:40:00.049636  480112 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.049702  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:40:00.049722  480112 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 89.733µs
	I1205 06:40:00.050277  480112 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:40:00.050353  480112 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051458  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:40:00.051500  480112 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.155091ms
	I1205 06:40:00.051529  480112 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051582  480112 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051659  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:40:00.051680  480112 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 114.34µs
	I1205 06:40:00.051702  480112 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:40:00.051741  480112 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.051791  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:40:00.051822  480112 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 83.054µs
	I1205 06:40:00.063751  480112 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064371  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:40:00.064388  480112 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 658.756µs
	I1205 06:40:00.064400  480112 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:40:00.064453  480112 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.064504  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:40:00.064510  480112 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 92.349µs
	I1205 06:40:00.064516  480112 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:40:00.064532  480112 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:40:00.067976  480112 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:40:00.068029  480112 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 3.495239ms
	I1205 06:40:00.068074  480112 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:40:00.058631  480112 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:40:00.068155  480112 cache.go:87] Successfully saved all images to host disk.
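
Each cache.go:80 line above corresponds to one per-image tarball under the profile's cache directory, which is what "Successfully saved all images to host disk" summarizes. A sketch for inspecting that cache on the Jenkins host (path taken from the log):

    # List the cached arm64 registry.k8s.io image tarballs minikube just verified.
    ls -lh /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/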
	I1205 06:40:00.156134  480112 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:40:00.156177  480112 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:40:00.160840  480112 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:40:00.160889  480112 machine.go:94] provisionDockerMachine start ...
	I1205 06:40:00.161003  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.232523  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.232876  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.232886  480112 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:40:00.484459  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.484485  480112 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:40:00.484571  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.540991  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.541328  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.541341  480112 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:40:00.761314  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:40:00.761404  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:00.782315  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:00.782666  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:00.782689  480112 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:40:00.934901  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
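
The SSH script above is idempotent: it rewrites the 127.0.1.1 entry only when no existing /etc/hosts line already maps the hostname. A quick hand-check inside the node (sketch, using the profile name from this run):

    minikube -p functional-787602 ssh -- grep 127.0.1.1 /etc/hosts
    # expected: 127.0.1.1 functional-787602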
	I1205 06:40:00.934930  480112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:40:00.935005  480112 ubuntu.go:190] setting up certificates
	I1205 06:40:00.935016  480112 provision.go:84] configureAuth start
	I1205 06:40:00.935097  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:00.952439  480112 provision.go:143] copyHostCerts
	I1205 06:40:00.952486  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952527  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:40:00.952543  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:40:00.952619  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:40:00.952705  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952727  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:40:00.952737  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:40:00.952765  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:40:00.952809  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952828  480112 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:40:00.952837  480112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:40:00.952861  480112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:40:00.952911  480112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:40:01.160028  480112 provision.go:177] copyRemoteCerts
	I1205 06:40:01.160150  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:40:01.160201  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.184354  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.295740  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 06:40:01.295812  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:40:01.316925  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 06:40:01.316986  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:40:01.339507  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 06:40:01.339574  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:40:01.358710  480112 provision.go:87] duration metric: took 423.67042ms to configureAuth
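
configureAuth regenerates a server certificate whose SANs cover every entry in the san=[...] list logged above, then scps it into /etc/docker on the node. A sketch for verifying the SANs locally with openssl (cert path taken from the log; openssl availability on the host is assumed):

    openssl x509 -in /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'
    # expected to list: 127.0.0.1, 192.168.49.2, functional-787602, localhost, minikube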
	I1205 06:40:01.358788  480112 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:40:01.358981  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:01.359104  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.377010  480112 main.go:143] libmachine: Using SSH client type: native
	I1205 06:40:01.377340  480112 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:40:01.377360  480112 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:40:01.723262  480112 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:40:01.723303  480112 machine.go:97] duration metric: took 1.56238873s to provisionDockerMachine
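
The drop-in written a few lines above feeds --insecure-registry 10.96.0.0/12 to CRI-O through the CRIO_MINIKUBE_OPTIONS environment file before the service restart. Verifying it by hand is a one-liner (sketch, same profile):

    minikube -p functional-787602 ssh -- cat /etc/sysconfig/crio.minikube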
	I1205 06:40:01.723316  480112 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:40:01.723329  480112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:40:01.723398  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:40:01.723446  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.742177  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:01.847102  480112 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:40:01.850854  480112 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1205 06:40:01.850880  480112 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1205 06:40:01.850885  480112 command_runner.go:130] > VERSION_ID="12"
	I1205 06:40:01.850889  480112 command_runner.go:130] > VERSION="12 (bookworm)"
	I1205 06:40:01.850897  480112 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1205 06:40:01.850901  480112 command_runner.go:130] > ID=debian
	I1205 06:40:01.850906  480112 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1205 06:40:01.850910  480112 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1205 06:40:01.850918  480112 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1205 06:40:01.850955  480112 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:40:01.850978  480112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:40:01.850990  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:40:01.851049  480112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:40:01.851138  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:40:01.851149  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /etc/ssl/certs/4441472.pem
	I1205 06:40:01.851230  480112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:40:01.851237  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> /etc/test/nested/copy/444147/hosts
	I1205 06:40:01.851282  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:40:01.859516  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:01.879483  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:40:01.898655  480112 start.go:296] duration metric: took 175.324245ms for postStartSetup
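
The filesync scan above mirrors anything staged under .minikube/files/ into the node at the same absolute path, which is how the test's nested hosts file and the extra CA bundle land under /etc. A minimal sketch of that convention (paths follow the log; file content here is illustrative only, and the copy happens on the next minikube start):

    # Files under $MINIKUBE_HOME/.minikube/files/<path> appear at /<path> in the node.
    mkdir -p ~/.minikube/files/etc/test/nested/copy/444147
    echo '1.2.3.4 example.test' > ~/.minikube/files/etc/test/nested/copy/444147/hosts
    minikube -p functional-787602 ssh -- cat /etc/test/nested/copy/444147/hosts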
	I1205 06:40:01.898744  480112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:40:01.898799  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:01.917838  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.020238  480112 command_runner.go:130] > 18%
	I1205 06:40:02.020354  480112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:40:02.025815  480112 command_runner.go:130] > 160G
	I1205 06:40:02.026493  480112 fix.go:56] duration metric: took 1.979519007s for fixHost
	I1205 06:40:02.026516  480112 start.go:83] releasing machines lock for "functional-787602", held for 1.979574696s
	I1205 06:40:02.026587  480112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:40:02.046979  480112 ssh_runner.go:195] Run: cat /version.json
	I1205 06:40:02.047030  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.047280  480112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:40:02.047345  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:02.081102  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.085747  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:02.189932  480112 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1205 06:40:02.190072  480112 ssh_runner.go:195] Run: systemctl --version
	I1205 06:40:02.280062  480112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 06:40:02.282950  480112 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1205 06:40:02.282989  480112 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 06:40:02.283061  480112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:40:02.319896  480112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 06:40:02.324212  480112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 06:40:02.324374  480112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:40:02.324444  480112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:40:02.332670  480112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:40:02.332736  480112 start.go:496] detecting cgroup driver to use...
	I1205 06:40:02.332774  480112 detect.go:187] detected "cgroupfs" cgroup driver on host os
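
minikube keys CRI-O's cgroup_manager setting (applied below) off the cgroup driver detected on the host OS, reported here as "cgroupfs". Equivalent hand checks on the host (sketch; docker CLI assumed present since this run uses the docker driver):

    docker info --format '{{.CgroupDriver}}'   # cgroupfs or systemd
    stat -fc %T /sys/fs/cgroup                 # cgroup2fs indicates the unified hierarchy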
	I1205 06:40:02.332831  480112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:40:02.348502  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:40:02.361851  480112 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:40:02.361926  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:40:02.380602  480112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:40:02.393710  480112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:40:02.522109  480112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:40:02.655884  480112 docker.go:234] disabling docker service ...
	I1205 06:40:02.655958  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:40:02.673330  480112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:40:02.687649  480112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:40:02.802223  480112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:40:02.930343  480112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
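
With CRI-O selected as the runtime, the docker and cri-docker units are stopped, disabled, and masked so socket activation cannot restart them; the final is-active probe confirms docker stayed down. Checking the mask state afterwards (sketch):

    minikube -p functional-787602 ssh -- systemctl is-enabled docker.service cri-docker.service
    # expected: masked (for both units)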
	I1205 06:40:02.944017  480112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:40:02.956898  480112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
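
/etc/crictl.yaml pins crictl to CRI-O's socket so the later crictl calls in this log need no endpoint flag. The equivalent explicit invocation, run inside the node, would be (sketch):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version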
	I1205 06:40:02.958122  480112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:40:02.958248  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.967567  480112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:40:02.967712  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.976781  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.985897  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:02.994984  480112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:40:03.003975  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.013874  480112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.022919  480112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
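
The sed sequence above leaves /etc/crio/crio.conf.d/02-crio.conf with the pinned pause image, the cgroupfs manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A grep sketch of the expected result, run inside the node (keys taken from the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",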
	I1205 06:40:03.032163  480112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:40:03.038816  480112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 06:40:03.039990  480112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:40:03.049427  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.175291  480112 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:40:03.341374  480112 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:40:03.341477  480112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:40:03.345425  480112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 06:40:03.345448  480112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 06:40:03.345464  480112 command_runner.go:130] > Device: 0,73	Inode: 1755        Links: 1
	I1205 06:40:03.345472  480112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:03.345477  480112 command_runner.go:130] > Access: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345484  480112 command_runner.go:130] > Modify: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345489  480112 command_runner.go:130] > Change: 2025-12-05 06:40:03.287268628 +0000
	I1205 06:40:03.345493  480112 command_runner.go:130] >  Birth: -
	I1205 06:40:03.345525  480112 start.go:564] Will wait 60s for crictl version
	I1205 06:40:03.345579  480112 ssh_runner.go:195] Run: which crictl
	I1205 06:40:03.348931  480112 command_runner.go:130] > /usr/local/bin/crictl
	I1205 06:40:03.349401  480112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:40:03.373825  480112 command_runner.go:130] > Version:  0.1.0
	I1205 06:40:03.373849  480112 command_runner.go:130] > RuntimeName:  cri-o
	I1205 06:40:03.373973  480112 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1205 06:40:03.374159  480112 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 06:40:03.376168  480112 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:40:03.376252  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.403613  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.403690  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.403710  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.403727  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.403756  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.403777  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.403795  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.403813  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.403844  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.403865  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.403879  480112 command_runner.go:130] >      static
	I1205 06:40:03.403895  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.403924  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.403945  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.403964  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.403979  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.404006  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.404027  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.404044  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.404059  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.406234  480112 ssh_runner.go:195] Run: crio --version
	I1205 06:40:03.432776  480112 command_runner.go:130] > crio version 1.34.2
	I1205 06:40:03.432811  480112 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1205 06:40:03.432836  480112 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1205 06:40:03.432843  480112 command_runner.go:130] >    GitTreeState:   dirty
	I1205 06:40:03.432849  480112 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1205 06:40:03.432862  480112 command_runner.go:130] >    GoVersion:      go1.24.6
	I1205 06:40:03.432872  480112 command_runner.go:130] >    Compiler:       gc
	I1205 06:40:03.432877  480112 command_runner.go:130] >    Platform:       linux/arm64
	I1205 06:40:03.432886  480112 command_runner.go:130] >    Linkmode:       static
	I1205 06:40:03.432908  480112 command_runner.go:130] >    BuildTags:
	I1205 06:40:03.432916  480112 command_runner.go:130] >      static
	I1205 06:40:03.432920  480112 command_runner.go:130] >      netgo
	I1205 06:40:03.432948  480112 command_runner.go:130] >      osusergo
	I1205 06:40:03.432956  480112 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1205 06:40:03.432959  480112 command_runner.go:130] >      seccomp
	I1205 06:40:03.432963  480112 command_runner.go:130] >      apparmor
	I1205 06:40:03.432970  480112 command_runner.go:130] >      selinux
	I1205 06:40:03.432998  480112 command_runner.go:130] >    LDFlags:          unknown
	I1205 06:40:03.433006  480112 command_runner.go:130] >    SeccompEnabled:   true
	I1205 06:40:03.433010  480112 command_runner.go:130] >    AppArmorEnabled:  false
	I1205 06:40:03.440242  480112 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:40:03.443151  480112 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:40:03.459691  480112 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:40:03.463610  480112 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1205 06:40:03.463748  480112 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:40:03.463853  480112 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:40:03.463910  480112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:40:03.497207  480112 command_runner.go:130] > {
	I1205 06:40:03.497226  480112 command_runner.go:130] >   "images":  [
	I1205 06:40:03.497231  480112 command_runner.go:130] >     {
	I1205 06:40:03.497239  480112 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1205 06:40:03.497244  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497250  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 06:40:03.497253  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497257  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497267  480112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1205 06:40:03.497271  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497276  480112 command_runner.go:130] >       "size":  "29035622",
	I1205 06:40:03.497279  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497283  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497286  480112 command_runner.go:130] >     },
	I1205 06:40:03.497290  480112 command_runner.go:130] >     {
	I1205 06:40:03.497297  480112 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1205 06:40:03.497301  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497306  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1205 06:40:03.497309  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497313  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497321  480112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1205 06:40:03.497324  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497328  480112 command_runner.go:130] >       "size":  "74488375",
	I1205 06:40:03.497332  480112 command_runner.go:130] >       "username":  "nonroot",
	I1205 06:40:03.497336  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497340  480112 command_runner.go:130] >     },
	I1205 06:40:03.497343  480112 command_runner.go:130] >     {
	I1205 06:40:03.497350  480112 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1205 06:40:03.497354  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497359  480112 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1205 06:40:03.497362  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497366  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497388  480112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1205 06:40:03.497393  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497397  480112 command_runner.go:130] >       "size":  "60854229",
	I1205 06:40:03.497401  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497405  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497409  480112 command_runner.go:130] >       },
	I1205 06:40:03.497413  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497417  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497421  480112 command_runner.go:130] >     },
	I1205 06:40:03.497424  480112 command_runner.go:130] >     {
	I1205 06:40:03.497430  480112 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1205 06:40:03.497434  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497439  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1205 06:40:03.497442  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497446  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497454  480112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1205 06:40:03.497459  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497463  480112 command_runner.go:130] >       "size":  "84947242",
	I1205 06:40:03.497466  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497469  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497473  480112 command_runner.go:130] >       },
	I1205 06:40:03.497476  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497480  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497483  480112 command_runner.go:130] >     },
	I1205 06:40:03.497486  480112 command_runner.go:130] >     {
	I1205 06:40:03.497492  480112 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1205 06:40:03.497496  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497501  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1205 06:40:03.497505  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497509  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497517  480112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1205 06:40:03.497520  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497529  480112 command_runner.go:130] >       "size":  "72167568",
	I1205 06:40:03.497539  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497542  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497545  480112 command_runner.go:130] >       },
	I1205 06:40:03.497549  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497552  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497555  480112 command_runner.go:130] >     },
	I1205 06:40:03.497558  480112 command_runner.go:130] >     {
	I1205 06:40:03.497564  480112 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1205 06:40:03.497568  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497573  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1205 06:40:03.497575  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497579  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497588  480112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1205 06:40:03.497592  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497595  480112 command_runner.go:130] >       "size":  "74105124",
	I1205 06:40:03.497599  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497603  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497606  480112 command_runner.go:130] >     },
	I1205 06:40:03.497609  480112 command_runner.go:130] >     {
	I1205 06:40:03.497615  480112 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1205 06:40:03.497618  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497624  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1205 06:40:03.497627  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497630  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497638  480112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1205 06:40:03.497641  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497645  480112 command_runner.go:130] >       "size":  "49819792",
	I1205 06:40:03.497648  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497652  480112 command_runner.go:130] >         "value":  "0"
	I1205 06:40:03.497655  480112 command_runner.go:130] >       },
	I1205 06:40:03.497659  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497663  480112 command_runner.go:130] >       "pinned":  false
	I1205 06:40:03.497666  480112 command_runner.go:130] >     },
	I1205 06:40:03.497672  480112 command_runner.go:130] >     {
	I1205 06:40:03.497679  480112 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1205 06:40:03.497683  480112 command_runner.go:130] >       "repoTags":  [
	I1205 06:40:03.497687  480112 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.497690  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497694  480112 command_runner.go:130] >       "repoDigests":  [
	I1205 06:40:03.497701  480112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1205 06:40:03.497705  480112 command_runner.go:130] >       ],
	I1205 06:40:03.497708  480112 command_runner.go:130] >       "size":  "517328",
	I1205 06:40:03.497712  480112 command_runner.go:130] >       "uid":  {
	I1205 06:40:03.497715  480112 command_runner.go:130] >         "value":  "65535"
	I1205 06:40:03.497718  480112 command_runner.go:130] >       },
	I1205 06:40:03.497722  480112 command_runner.go:130] >       "username":  "",
	I1205 06:40:03.497726  480112 command_runner.go:130] >       "pinned":  true
	I1205 06:40:03.497729  480112 command_runner.go:130] >     }
	I1205 06:40:03.497732  480112 command_runner.go:130] >   ]
	I1205 06:40:03.497735  480112 command_runner.go:130] > }
	I1205 06:40:03.499390  480112 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:40:03.499408  480112 cache_images.go:86] Images are preloaded, skipping loading
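
The JSON dump above is what lets crio.go:514 conclude that every kubeadm image for v1.35.0-beta.0 is already present, so no loading is needed. A compact sketch of the same check, run inside the node (jq availability is an assumption):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort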
	I1205 06:40:03.499417  480112 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:40:03.499515  480112 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
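
The [Unit]/[Service] fragment above is the systemd drop-in minikube writes for the kubelet; note that ExecStart= is first cleared so the override fully replaces the packaged command line rather than appending to it. Inspecting the merged unit on the node (sketch; the exact drop-in path may vary by kicbase version):

    minikube -p functional-787602 ssh -- sudo systemctl cat kubelet
    # drop-in typically rendered from /lib/systemd/system/kubelet.service.d/10-kubeadm.conf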
	I1205 06:40:03.499587  480112 ssh_runner.go:195] Run: crio config
	I1205 06:40:03.548638  480112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 06:40:03.548661  480112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 06:40:03.548669  480112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 06:40:03.548671  480112 command_runner.go:130] > #
	I1205 06:40:03.548686  480112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 06:40:03.548693  480112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 06:40:03.548700  480112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 06:40:03.548716  480112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 06:40:03.548720  480112 command_runner.go:130] > # reload'.
	I1205 06:40:03.548726  480112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 06:40:03.548733  480112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 06:40:03.548739  480112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 06:40:03.548745  480112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 06:40:03.548748  480112 command_runner.go:130] > [crio]
	I1205 06:40:03.548755  480112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 06:40:03.548760  480112 command_runner.go:130] > # containers images, in this directory.
	I1205 06:40:03.549179  480112 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 06:40:03.549226  480112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 06:40:03.549246  480112 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1205 06:40:03.549268  480112 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 06:40:03.549287  480112 command_runner.go:130] > # imagestore = ""
	I1205 06:40:03.549306  480112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 06:40:03.549324  480112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 06:40:03.549341  480112 command_runner.go:130] > # storage_driver = "overlay"
	I1205 06:40:03.549356  480112 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 06:40:03.549385  480112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 06:40:03.549402  480112 command_runner.go:130] > # storage_option = [
	I1205 06:40:03.549417  480112 command_runner.go:130] > # ]
	I1205 06:40:03.549435  480112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 06:40:03.549461  480112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 06:40:03.549487  480112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 06:40:03.549504  480112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 06:40:03.549521  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 06:40:03.549545  480112 command_runner.go:130] > # always happen on a node reboot
	I1205 06:40:03.549737  480112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 06:40:03.549768  480112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 06:40:03.549775  480112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 06:40:03.549781  480112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 06:40:03.549785  480112 command_runner.go:130] > # version_file_persist = ""
	I1205 06:40:03.549793  480112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 06:40:03.549801  480112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 06:40:03.549805  480112 command_runner.go:130] > # internal_wipe = true
	I1205 06:40:03.549813  480112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 06:40:03.549818  480112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 06:40:03.549822  480112 command_runner.go:130] > # internal_repair = true
	I1205 06:40:03.549828  480112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 06:40:03.549834  480112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 06:40:03.549840  480112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 06:40:03.549845  480112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 06:40:03.549854  480112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 06:40:03.549858  480112 command_runner.go:130] > [crio.api]
	I1205 06:40:03.549863  480112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 06:40:03.549867  480112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 06:40:03.549872  480112 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 06:40:03.549876  480112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 06:40:03.549883  480112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 06:40:03.549889  480112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 06:40:03.549892  480112 command_runner.go:130] > # stream_port = "0"
	I1205 06:40:03.549897  480112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 06:40:03.549901  480112 command_runner.go:130] > # stream_enable_tls = false
	I1205 06:40:03.549907  480112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 06:40:03.549911  480112 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 06:40:03.549917  480112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 06:40:03.549923  480112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549927  480112 command_runner.go:130] > # stream_tls_cert = ""
	I1205 06:40:03.549933  480112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 06:40:03.549939  480112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1205 06:40:03.549942  480112 command_runner.go:130] > # stream_tls_key = ""
	I1205 06:40:03.549948  480112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 06:40:03.549954  480112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 06:40:03.549958  480112 command_runner.go:130] > # automatically pick up the changes.
	I1205 06:40:03.549962  480112 command_runner.go:130] > # stream_tls_ca = ""
	I1205 06:40:03.549979  480112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549984  480112 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 06:40:03.549991  480112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 06:40:03.549996  480112 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 06:40:03.550002  480112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 06:40:03.550007  480112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 06:40:03.550010  480112 command_runner.go:130] > [crio.runtime]
	I1205 06:40:03.550016  480112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 06:40:03.550021  480112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 06:40:03.550025  480112 command_runner.go:130] > # "nofile=1024:2048"
	I1205 06:40:03.550034  480112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 06:40:03.550038  480112 command_runner.go:130] > # default_ulimits = [
	I1205 06:40:03.550041  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550047  480112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 06:40:03.550050  480112 command_runner.go:130] > # no_pivot = false
	I1205 06:40:03.550056  480112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 06:40:03.550062  480112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 06:40:03.550067  480112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 06:40:03.550072  480112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 06:40:03.550077  480112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 06:40:03.550084  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550087  480112 command_runner.go:130] > # conmon = ""
	I1205 06:40:03.550092  480112 command_runner.go:130] > # Cgroup setting for conmon
	I1205 06:40:03.550099  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 06:40:03.550102  480112 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 06:40:03.550108  480112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 06:40:03.550115  480112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 06:40:03.550124  480112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 06:40:03.550128  480112 command_runner.go:130] > # conmon_env = [
	I1205 06:40:03.550130  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550136  480112 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 06:40:03.550141  480112 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 06:40:03.550146  480112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 06:40:03.550150  480112 command_runner.go:130] > # default_env = [
	I1205 06:40:03.550152  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550158  480112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 06:40:03.550165  480112 command_runner.go:130] > # This option is deprecated, and will be inferred from whether SELinux is enabled on the host in the future.
	I1205 06:40:03.550169  480112 command_runner.go:130] > # selinux = false
	I1205 06:40:03.550180  480112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 06:40:03.550188  480112 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1205 06:40:03.550193  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550197  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.550202  480112 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1205 06:40:03.550212  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550216  480112 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1205 06:40:03.550223  480112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 06:40:03.550229  480112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 06:40:03.550235  480112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 06:40:03.550241  480112 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1205 06:40:03.550246  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550250  480112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 06:40:03.550255  480112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 06:40:03.550259  480112 command_runner.go:130] > # the cgroup blockio controller.
	I1205 06:40:03.550263  480112 command_runner.go:130] > # blockio_config_file = ""
	I1205 06:40:03.550269  480112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 06:40:03.550273  480112 command_runner.go:130] > # blockio parameters.
	I1205 06:40:03.550277  480112 command_runner.go:130] > # blockio_reload = false
	I1205 06:40:03.550284  480112 command_runner.go:130] > # Used to change the irqbalance service config file path, which is used for configuring
	I1205 06:40:03.550287  480112 command_runner.go:130] > # the irqbalance daemon.
	I1205 06:40:03.550292  480112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 06:40:03.550298  480112 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1205 06:40:03.550305  480112 command_runner.go:130] > # restore as the irqbalance config at startup. Set to an empty string to disable this flow entirely.
	I1205 06:40:03.550313  480112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 06:40:03.550319  480112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 06:40:03.550325  480112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 06:40:03.550330  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.550333  480112 command_runner.go:130] > # rdt_config_file = ""
	I1205 06:40:03.550338  480112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 06:40:03.550342  480112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 06:40:03.550348  480112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 06:40:03.550711  480112 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 06:40:03.550724  480112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 06:40:03.550731  480112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 06:40:03.550734  480112 command_runner.go:130] > # will be added.
	I1205 06:40:03.550738  480112 command_runner.go:130] > # default_capabilities = [
	I1205 06:40:03.550742  480112 command_runner.go:130] > # 	"CHOWN",
	I1205 06:40:03.550746  480112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 06:40:03.550749  480112 command_runner.go:130] > # 	"FSETID",
	I1205 06:40:03.550752  480112 command_runner.go:130] > # 	"FOWNER",
	I1205 06:40:03.550756  480112 command_runner.go:130] > # 	"SETGID",
	I1205 06:40:03.550759  480112 command_runner.go:130] > # 	"SETUID",
	I1205 06:40:03.550782  480112 command_runner.go:130] > # 	"SETPCAP",
	I1205 06:40:03.550786  480112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 06:40:03.550789  480112 command_runner.go:130] > # 	"KILL",
	I1205 06:40:03.550792  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550800  480112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 06:40:03.550810  480112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 06:40:03.550815  480112 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 06:40:03.550821  480112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 06:40:03.550827  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550831  480112 command_runner.go:130] > default_sysctls = [
	I1205 06:40:03.550835  480112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 06:40:03.550838  480112 command_runner.go:130] > ]
	I1205 06:40:03.550842  480112 command_runner.go:130] > # List of devices on the host that a
	I1205 06:40:03.550849  480112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 06:40:03.550852  480112 command_runner.go:130] > # allowed_devices = [
	I1205 06:40:03.550856  480112 command_runner.go:130] > # 	"/dev/fuse",
	I1205 06:40:03.550859  480112 command_runner.go:130] > # 	"/dev/net/tun",
	I1205 06:40:03.550863  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550867  480112 command_runner.go:130] > # List of additional devices, specified as
	I1205 06:40:03.550875  480112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 06:40:03.550880  480112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 06:40:03.550886  480112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 06:40:03.550889  480112 command_runner.go:130] > # additional_devices = [
	I1205 06:40:03.550894  480112 command_runner.go:130] > # ]
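For illustration only (not output from this test run): a minimal TOML sketch of the additional_devices format described above, reusing the comment's own example mapping:

	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]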
	I1205 06:40:03.550899  480112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 06:40:03.550905  480112 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 06:40:03.550909  480112 command_runner.go:130] > # 	"/etc/cdi",
	I1205 06:40:03.550912  480112 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 06:40:03.550915  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550921  480112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 06:40:03.550927  480112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking the host's uid/gid.
	I1205 06:40:03.550931  480112 command_runner.go:130] > # Defaults to false.
	I1205 06:40:03.550936  480112 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 06:40:03.550942  480112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 06:40:03.550949  480112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 06:40:03.550952  480112 command_runner.go:130] > # hooks_dir = [
	I1205 06:40:03.550956  480112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 06:40:03.550962  480112 command_runner.go:130] > # ]
	I1205 06:40:03.550972  480112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 06:40:03.550979  480112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 06:40:03.550984  480112 command_runner.go:130] > # its default mounts from the following two files:
	I1205 06:40:03.550987  480112 command_runner.go:130] > #
	I1205 06:40:03.550993  480112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 06:40:03.550999  480112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 06:40:03.551004  480112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 06:40:03.551007  480112 command_runner.go:130] > #
	I1205 06:40:03.551013  480112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 06:40:03.551019  480112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 06:40:03.551025  480112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 06:40:03.551030  480112 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 06:40:03.551032  480112 command_runner.go:130] > #
	I1205 06:40:03.551036  480112 command_runner.go:130] > # default_mounts_file = ""
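For illustration only (not output from this test run): a sketch of the /SRC:/DST format described above as it would appear in a mounts.conf file, one mount per line; the paths are hypothetical:

	/opt/host-data:/mnt/host-data
	/etc/host-certs:/etc/certs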
	I1205 06:40:03.551041  480112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 06:40:03.551047  480112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 06:40:03.551051  480112 command_runner.go:130] > # pids_limit = -1
	I1205 06:40:03.551057  480112 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 06:40:03.551063  480112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 06:40:03.551069  480112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 06:40:03.551077  480112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 06:40:03.551080  480112 command_runner.go:130] > # log_size_max = -1
	I1205 06:40:03.551087  480112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 06:40:03.551091  480112 command_runner.go:130] > # log_to_journald = false
	I1205 06:40:03.551098  480112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 06:40:03.551103  480112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 06:40:03.551108  480112 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 06:40:03.551113  480112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 06:40:03.551118  480112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 06:40:03.551121  480112 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 06:40:03.551127  480112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 06:40:03.551131  480112 command_runner.go:130] > # read_only = false
	I1205 06:40:03.551137  480112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 06:40:03.551147  480112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 06:40:03.551151  480112 command_runner.go:130] > # live configuration reload.
	I1205 06:40:03.551154  480112 command_runner.go:130] > # log_level = "info"
	I1205 06:40:03.551160  480112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 06:40:03.551164  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.551168  480112 command_runner.go:130] > # log_filter = ""
	I1205 06:40:03.551174  480112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551180  480112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 06:40:03.551184  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551192  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551196  480112 command_runner.go:130] > # uid_mappings = ""
	I1205 06:40:03.551201  480112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 06:40:03.551208  480112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 06:40:03.551212  480112 command_runner.go:130] > # separated by comma.
	I1205 06:40:03.551219  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551223  480112 command_runner.go:130] > # gid_mappings = ""
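For illustration only (not output from this test run, and note these options are marked deprecated above): a sketch of the containerUID:HostUID:Size range format just described; the 0:100000:65536 range is a hypothetical example mapping container root to host ID 100000 with a range of 65536:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"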
	I1205 06:40:03.551229  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 06:40:03.551235  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551241  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551249  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551253  480112 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 06:40:03.551259  480112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 06:40:03.551264  480112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 06:40:03.551271  480112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 06:40:03.551278  480112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 06:40:03.551282  480112 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 06:40:03.551288  480112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 06:40:03.551296  480112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 06:40:03.551302  480112 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1205 06:40:03.551306  480112 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 06:40:03.551311  480112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 06:40:03.551317  480112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 06:40:03.551322  480112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 06:40:03.551330  480112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 06:40:03.551333  480112 command_runner.go:130] > # drop_infra_ctr = true
	I1205 06:40:03.551340  480112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 06:40:03.551346  480112 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 06:40:03.551353  480112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 06:40:03.551357  480112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 06:40:03.551364  480112 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 06:40:03.551370  480112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 06:40:03.551375  480112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 06:40:03.551380  480112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 06:40:03.551384  480112 command_runner.go:130] > # shared_cpuset = ""
	I1205 06:40:03.551390  480112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 06:40:03.551395  480112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 06:40:03.551398  480112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 06:40:03.551405  480112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 06:40:03.551408  480112 command_runner.go:130] > # pinns_path = ""
	I1205 06:40:03.551414  480112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 06:40:03.551420  480112 command_runner.go:130] > # checkpoint and restore containers or pods (even if CRIU is found in $PATH).
	I1205 06:40:03.551424  480112 command_runner.go:130] > # enable_criu_support = true
	I1205 06:40:03.551428  480112 command_runner.go:130] > # Enable/disable the generation of the container and
	I1205 06:40:03.551434  480112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1205 06:40:03.551438  480112 command_runner.go:130] > # enable_pod_events = false
	I1205 06:40:03.551444  480112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 06:40:03.551449  480112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 06:40:03.551453  480112 command_runner.go:130] > # default_runtime = "crun"
	I1205 06:40:03.551458  480112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 06:40:03.551466  480112 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1205 06:40:03.551475  480112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 06:40:03.551480  480112 command_runner.go:130] > # creation as a file is not desired either.
	I1205 06:40:03.551488  480112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 06:40:03.551495  480112 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 06:40:03.551499  480112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 06:40:03.551502  480112 command_runner.go:130] > # ]
	I1205 06:40:03.551511  480112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 06:40:03.551518  480112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 06:40:03.551524  480112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 06:40:03.551528  480112 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 06:40:03.551532  480112 command_runner.go:130] > #
	I1205 06:40:03.551536  480112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 06:40:03.551541  480112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 06:40:03.551544  480112 command_runner.go:130] > # runtime_type = "oci"
	I1205 06:40:03.551549  480112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 06:40:03.551553  480112 command_runner.go:130] > # inherit_default_runtime = false
	I1205 06:40:03.551558  480112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 06:40:03.551562  480112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 06:40:03.551566  480112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 06:40:03.551570  480112 command_runner.go:130] > # monitor_env = []
	I1205 06:40:03.551574  480112 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 06:40:03.551578  480112 command_runner.go:130] > # allowed_annotations = []
	I1205 06:40:03.551583  480112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 06:40:03.551587  480112 command_runner.go:130] > # no_sync_log = false
	I1205 06:40:03.551590  480112 command_runner.go:130] > # default_annotations = {}
	I1205 06:40:03.551594  480112 command_runner.go:130] > # stream_websockets = false
	I1205 06:40:03.551598  480112 command_runner.go:130] > # seccomp_profile = ""
	I1205 06:40:03.551631  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.551636  480112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 06:40:03.551643  480112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 06:40:03.551649  480112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 06:40:03.551656  480112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 06:40:03.551659  480112 command_runner.go:130] > #   in $PATH.
	I1205 06:40:03.551665  480112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 06:40:03.551669  480112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 06:40:03.551675  480112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 06:40:03.551678  480112 command_runner.go:130] > #   state.
	I1205 06:40:03.551685  480112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 06:40:03.551690  480112 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 06:40:03.551699  480112 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1205 06:40:03.551706  480112 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1205 06:40:03.551711  480112 command_runner.go:130] > #   the values from the default runtime at load time.
	I1205 06:40:03.551717  480112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 06:40:03.551723  480112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 06:40:03.551730  480112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 06:40:03.551736  480112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 06:40:03.551740  480112 command_runner.go:130] > #   The currently recognized values are:
	I1205 06:40:03.551747  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 06:40:03.551754  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 06:40:03.551761  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 06:40:03.551767  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 06:40:03.551774  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 06:40:03.551781  480112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 06:40:03.551788  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 06:40:03.551794  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 06:40:03.551800  480112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 06:40:03.551807  480112 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1205 06:40:03.551813  480112 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1205 06:40:03.551819  480112 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1205 06:40:03.551828  480112 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1205 06:40:03.551834  480112 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1205 06:40:03.551840  480112 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1205 06:40:03.551848  480112 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1205 06:40:03.551854  480112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 06:40:03.551858  480112 command_runner.go:130] > #   deprecated option "conmon".
	I1205 06:40:03.551865  480112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 06:40:03.551870  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 06:40:03.551877  480112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 06:40:03.551882  480112 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 06:40:03.551888  480112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 06:40:03.551893  480112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 06:40:03.551900  480112 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1205 06:40:03.551907  480112 command_runner.go:130] > #   conmon-rs by using:
	I1205 06:40:03.551915  480112 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1205 06:40:03.551924  480112 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1205 06:40:03.551931  480112 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1205 06:40:03.551937  480112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 06:40:03.551943  480112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 06:40:03.551950  480112 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1205 06:40:03.551958  480112 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1205 06:40:03.551964  480112 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1205 06:40:03.551971  480112 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1205 06:40:03.551979  480112 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1205 06:40:03.551983  480112 command_runner.go:130] > #   when a machine crash happens.
	I1205 06:40:03.551990  480112 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1205 06:40:03.551997  480112 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1205 06:40:03.552005  480112 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1205 06:40:03.552009  480112 command_runner.go:130] > #   seccomp profile for the runtime.
	I1205 06:40:03.552015  480112 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1205 06:40:03.552022  480112 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1205 06:40:03.552025  480112 command_runner.go:130] > #
	I1205 06:40:03.552029  480112 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 06:40:03.552032  480112 command_runner.go:130] > #
	I1205 06:40:03.552038  480112 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 06:40:03.552044  480112 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 06:40:03.552046  480112 command_runner.go:130] > #
	I1205 06:40:03.552053  480112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 06:40:03.552058  480112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 06:40:03.552061  480112 command_runner.go:130] > #
	I1205 06:40:03.552067  480112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 06:40:03.552070  480112 command_runner.go:130] > # feature.
	I1205 06:40:03.552072  480112 command_runner.go:130] > #
	I1205 06:40:03.552078  480112 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1205 06:40:03.552085  480112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 06:40:03.552090  480112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 06:40:03.552104  480112 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 06:40:03.552111  480112 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction" is "stop".
	I1205 06:40:03.552114  480112 command_runner.go:130] > #
	I1205 06:40:03.552121  480112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 06:40:03.552127  480112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 06:40:03.552129  480112 command_runner.go:130] > #
	I1205 06:40:03.552135  480112 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1205 06:40:03.552141  480112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 06:40:03.552144  480112 command_runner.go:130] > #
	I1205 06:40:03.552150  480112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 06:40:03.552156  480112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 06:40:03.552159  480112 command_runner.go:130] > # limitation.
	I1205 06:40:03.552163  480112 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1205 06:40:03.552167  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1205 06:40:03.552170  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552174  480112 command_runner.go:130] > runtime_root = "/run/crun"
	I1205 06:40:03.552178  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552182  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552188  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552193  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552197  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552200  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552204  480112 command_runner.go:130] > allowed_annotations = [
	I1205 06:40:03.552208  480112 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1205 06:40:03.552211  480112 command_runner.go:130] > ]
	I1205 06:40:03.552215  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552219  480112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 06:40:03.552223  480112 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1205 06:40:03.552226  480112 command_runner.go:130] > runtime_type = ""
	I1205 06:40:03.552230  480112 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 06:40:03.552234  480112 command_runner.go:130] > inherit_default_runtime = false
	I1205 06:40:03.552237  480112 command_runner.go:130] > runtime_config_path = ""
	I1205 06:40:03.552241  480112 command_runner.go:130] > container_min_memory = ""
	I1205 06:40:03.552248  480112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 06:40:03.552252  480112 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 06:40:03.552256  480112 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 06:40:03.552260  480112 command_runner.go:130] > privileged_without_host_devices = false
	I1205 06:40:03.552267  480112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 06:40:03.552272  480112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 06:40:03.552278  480112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 06:40:03.552286  480112 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1205 06:40:03.552300  480112 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1205 06:40:03.552310  480112 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1205 06:40:03.552319  480112 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1205 06:40:03.552324  480112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 06:40:03.552334  480112 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 06:40:03.552342  480112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 06:40:03.552349  480112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 06:40:03.552356  480112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 06:40:03.552359  480112 command_runner.go:130] > # Example:
	I1205 06:40:03.552364  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 06:40:03.552368  480112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 06:40:03.552373  480112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 06:40:03.552382  480112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 06:40:03.552385  480112 command_runner.go:130] > # cpuset = "0-1"
	I1205 06:40:03.552389  480112 command_runner.go:130] > # cpushares = "5"
	I1205 06:40:03.552392  480112 command_runner.go:130] > # cpuquota = "1000"
	I1205 06:40:03.552396  480112 command_runner.go:130] > # cpuperiod = "100000"
	I1205 06:40:03.552399  480112 command_runner.go:130] > # cpulimit = "35"
	I1205 06:40:03.552402  480112 command_runner.go:130] > # Where:
	I1205 06:40:03.552406  480112 command_runner.go:130] > # The workload name is workload-type.
	I1205 06:40:03.552413  480112 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 06:40:03.552419  480112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 06:40:03.552424  480112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 06:40:03.552432  480112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 06:40:03.552438  480112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
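For illustration only (not output from this test run): following the $annotation_prefix.$resource/$ctrName form described above, a hypothetical per-container cpushares override for a container named my-ctr under the example workload could be annotated on the pod as:

	io.crio.workload-type.cpushares/my-ctr = "200"

Here my-ctr and "200" are invented values; the exact value encoding should be checked against the CRI-O workloads documentation.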
	I1205 06:40:03.552445  480112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 06:40:03.552452  480112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 06:40:03.552456  480112 command_runner.go:130] > # Default value is set to true
	I1205 06:40:03.552461  480112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 06:40:03.552466  480112 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 06:40:03.552471  480112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 06:40:03.552475  480112 command_runner.go:130] > # Default value is set to 'false'
	I1205 06:40:03.552479  480112 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 06:40:03.552484  480112 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1205 06:40:03.552492  480112 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1205 06:40:03.552495  480112 command_runner.go:130] > # timezone = ""
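For illustration only (not output from this test run): a one-line sketch using the 'Local' value mentioned above, which matches the container's timezone to the host machine's:

	timezone = "Local"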
	I1205 06:40:03.552502  480112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 06:40:03.552504  480112 command_runner.go:130] > #
	I1205 06:40:03.552510  480112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 06:40:03.552517  480112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1205 06:40:03.552520  480112 command_runner.go:130] > [crio.image]
	I1205 06:40:03.552526  480112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 06:40:03.552530  480112 command_runner.go:130] > # default_transport = "docker://"
	I1205 06:40:03.552536  480112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 06:40:03.552543  480112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552547  480112 command_runner.go:130] > # global_auth_file = ""
	I1205 06:40:03.552552  480112 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 06:40:03.552557  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552561  480112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1205 06:40:03.552568  480112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 06:40:03.552574  480112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 06:40:03.552581  480112 command_runner.go:130] > # This option supports live configuration reload.
	I1205 06:40:03.552585  480112 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 06:40:03.552591  480112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 06:40:03.552597  480112 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 06:40:03.552603  480112 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 06:40:03.552608  480112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 06:40:03.552612  480112 command_runner.go:130] > # pause_command = "/pause"
	I1205 06:40:03.552622  480112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 06:40:03.552628  480112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 06:40:03.552641  480112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 06:40:03.552646  480112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 06:40:03.552652  480112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 06:40:03.552658  480112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 06:40:03.552661  480112 command_runner.go:130] > # pinned_images = [
	I1205 06:40:03.552664  480112 command_runner.go:130] > # ]
	I1205 06:40:03.552670  480112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 06:40:03.552675  480112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 06:40:03.552681  480112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 06:40:03.552687  480112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 06:40:03.552692  480112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 06:40:03.552697  480112 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1205 06:40:03.552702  480112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 06:40:03.552708  480112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 06:40:03.552716  480112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 06:40:03.552722  480112 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1205 06:40:03.552728  480112 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 06:40:03.552733  480112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
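As a worked example of the path rule above: with signature_policy_dir left at "/etc/crio/policies" and an image pulled for a pod in a hypothetical namespace "kube-system", the policy consulted would be /etc/crio/policies/kube-system.json; if that file does not exist, CRI-O falls back to the signature_policy or system-wide policy as described.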
	I1205 06:40:03.552738  480112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 06:40:03.552746  480112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 06:40:03.552749  480112 command_runner.go:130] > # changing them here.
	I1205 06:40:03.552755  480112 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1205 06:40:03.552758  480112 command_runner.go:130] > # insecure_registries = [
	I1205 06:40:03.552761  480112 command_runner.go:130] > # ]
	I1205 06:40:03.552767  480112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 06:40:03.552772  480112 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1205 06:40:03.552776  480112 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 06:40:03.552780  480112 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 06:40:03.553031  480112 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 06:40:03.553083  480112 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1205 06:40:03.553106  480112 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1205 06:40:03.553125  480112 command_runner.go:130] > # auto_reload_registries = false
	I1205 06:40:03.553145  480112 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1205 06:40:03.553166  480112 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1205 06:40:03.553207  480112 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1205 06:40:03.553227  480112 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1205 06:40:03.553245  480112 command_runner.go:130] > # The mode of short name resolution.
	I1205 06:40:03.553268  480112 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1205 06:40:03.553288  480112 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1205 06:40:03.553305  480112 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1205 06:40:03.553320  480112 command_runner.go:130] > # short_name_mode = "enforcing"
	I1205 06:40:03.553338  480112 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1205 06:40:03.553365  480112 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1205 06:40:03.553538  480112 command_runner.go:130] > # oci_artifact_mount_support = true
	I1205 06:40:03.553551  480112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 06:40:03.553555  480112 command_runner.go:130] > # CNI plugins.
	I1205 06:40:03.553559  480112 command_runner.go:130] > [crio.network]
	I1205 06:40:03.553564  480112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 06:40:03.553570  480112 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 06:40:03.553574  480112 command_runner.go:130] > # cni_default_network = ""
	I1205 06:40:03.553580  480112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 06:40:03.553587  480112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 06:40:03.553592  480112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 06:40:03.553597  480112 command_runner.go:130] > # plugin_dirs = [
	I1205 06:40:03.553600  480112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 06:40:03.553603  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553607  480112 command_runner.go:130] > # List of included pod metrics.
	I1205 06:40:03.553616  480112 command_runner.go:130] > # included_pod_metrics = [
	I1205 06:40:03.553620  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553625  480112 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 06:40:03.553628  480112 command_runner.go:130] > [crio.metrics]
	I1205 06:40:03.553634  480112 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 06:40:03.553637  480112 command_runner.go:130] > # enable_metrics = false
	I1205 06:40:03.553641  480112 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 06:40:03.553646  480112 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 06:40:03.553655  480112 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 06:40:03.553661  480112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 06:40:03.553670  480112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 06:40:03.553675  480112 command_runner.go:130] > # metrics_collectors = [
	I1205 06:40:03.553679  480112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 06:40:03.553683  480112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 06:40:03.553687  480112 command_runner.go:130] > # 	"containers_oom_total",
	I1205 06:40:03.553691  480112 command_runner.go:130] > # 	"processes_defunct",
	I1205 06:40:03.553695  480112 command_runner.go:130] > # 	"operations_total",
	I1205 06:40:03.553699  480112 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 06:40:03.553703  480112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 06:40:03.553707  480112 command_runner.go:130] > # 	"operations_errors_total",
	I1205 06:40:03.553711  480112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 06:40:03.553715  480112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 06:40:03.553719  480112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 06:40:03.553723  480112 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 06:40:03.553727  480112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 06:40:03.553731  480112 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 06:40:03.553736  480112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 06:40:03.553740  480112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 06:40:03.553744  480112 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1205 06:40:03.553747  480112 command_runner.go:130] > # ]
	I1205 06:40:03.553753  480112 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1205 06:40:03.553758  480112 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1205 06:40:03.553763  480112 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 06:40:03.553767  480112 command_runner.go:130] > # metrics_port = 9090
	I1205 06:40:03.553772  480112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 06:40:03.553775  480112 command_runner.go:130] > # metrics_socket = ""
	I1205 06:40:03.553780  480112 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 06:40:03.553786  480112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 06:40:03.553792  480112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 06:40:03.553798  480112 command_runner.go:130] > # certificate on any modification event.
	I1205 06:40:03.553802  480112 command_runner.go:130] > # metrics_cert = ""
	I1205 06:40:03.553807  480112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 06:40:03.553812  480112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 06:40:03.553822  480112 command_runner.go:130] > # metrics_key = ""
	I1205 06:40:03.553828  480112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 06:40:03.553831  480112 command_runner.go:130] > [crio.tracing]
	I1205 06:40:03.553836  480112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 06:40:03.553841  480112 command_runner.go:130] > # enable_tracing = false
	I1205 06:40:03.553846  480112 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 06:40:03.553850  480112 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1205 06:40:03.553857  480112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 06:40:03.553861  480112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 06:40:03.553865  480112 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 06:40:03.553868  480112 command_runner.go:130] > [crio.nri]
	I1205 06:40:03.553872  480112 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 06:40:03.553876  480112 command_runner.go:130] > # enable_nri = true
	I1205 06:40:03.553880  480112 command_runner.go:130] > # NRI socket to listen on.
	I1205 06:40:03.553884  480112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 06:40:03.553888  480112 command_runner.go:130] > # NRI plugin directory to use.
	I1205 06:40:03.553893  480112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 06:40:03.553898  480112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 06:40:03.553902  480112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 06:40:03.553908  480112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 06:40:03.553979  480112 command_runner.go:130] > # nri_disable_connections = false
	I1205 06:40:03.553985  480112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 06:40:03.553990  480112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 06:40:03.553995  480112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 06:40:03.554000  480112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 06:40:03.554004  480112 command_runner.go:130] > # NRI default validator configuration.
	I1205 06:40:03.554011  480112 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1205 06:40:03.554017  480112 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1205 06:40:03.554021  480112 command_runner.go:130] > # can be restricted/rejected:
	I1205 06:40:03.554025  480112 command_runner.go:130] > # - OCI hook injection
	I1205 06:40:03.554030  480112 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1205 06:40:03.554035  480112 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1205 06:40:03.554039  480112 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1205 06:40:03.554047  480112 command_runner.go:130] > # - adjustment of linux namespaces
	I1205 06:40:03.554054  480112 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1205 06:40:03.554060  480112 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1205 06:40:03.554066  480112 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1205 06:40:03.554070  480112 command_runner.go:130] > #
	I1205 06:40:03.554075  480112 command_runner.go:130] > # [crio.nri.default_validator]
	I1205 06:40:03.554079  480112 command_runner.go:130] > # nri_enable_default_validator = false
	I1205 06:40:03.554084  480112 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1205 06:40:03.554090  480112 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1205 06:40:03.554095  480112 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1205 06:40:03.554101  480112 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1205 06:40:03.554106  480112 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1205 06:40:03.554110  480112 command_runner.go:130] > # nri_validator_required_plugins = [
	I1205 06:40:03.554113  480112 command_runner.go:130] > # ]
	I1205 06:40:03.554118  480112 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1205 06:40:03.554124  480112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 06:40:03.554127  480112 command_runner.go:130] > [crio.stats]
	I1205 06:40:03.554133  480112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 06:40:03.554138  480112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 06:40:03.554142  480112 command_runner.go:130] > # stats_collection_period = 0
	I1205 06:40:03.554148  480112 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1205 06:40:03.554154  480112 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1205 06:40:03.554158  480112 command_runner.go:130] > # collection_period = 0
	I1205 06:40:03.556162  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527241832Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1205 06:40:03.556207  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527278608Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1205 06:40:03.556230  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527308122Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1205 06:40:03.556255  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.52733264Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1205 06:40:03.556280  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527409367Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:40:03.556295  480112 command_runner.go:130] ! time="2025-12-05T06:40:03.527814951Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1205 06:40:03.556306  480112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 06:40:03.556383  480112 cni.go:84] Creating CNI manager for ""
	I1205 06:40:03.556397  480112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:40:03.556420  480112 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:40:03.556447  480112 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:40:03.556582  480112 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
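	
	The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of inspecting one field of such a file programmatically, assuming the sigs.k8s.io/yaml package and that each "---" separator sits on its own line:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func main() {
		// Path taken from the scp line in the log below.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		// The file holds several YAML documents separated by "---";
		// find the KubeletConfiguration and read one field from it.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				log.Fatal(err)
			}
			if m["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", m["cgroupDriver"])
			}
		}
	}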
	
	I1205 06:40:03.556659  480112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:40:03.563611  480112 command_runner.go:130] > kubeadm
	I1205 06:40:03.563630  480112 command_runner.go:130] > kubectl
	I1205 06:40:03.563636  480112 command_runner.go:130] > kubelet
	I1205 06:40:03.564590  480112 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:40:03.564681  480112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:40:03.572146  480112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:40:03.584914  480112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:40:03.598402  480112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 06:40:03.610806  480112 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:40:03.614247  480112 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1205 06:40:03.614336  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:03.749526  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:04.526831  480112 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:40:04.526920  480112 certs.go:195] generating shared ca certs ...
	I1205 06:40:04.526970  480112 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:04.527146  480112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:40:04.527262  480112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:40:04.527298  480112 certs.go:257] generating profile certs ...
	I1205 06:40:04.527454  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:40:04.527572  480112 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:40:04.527654  480112 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:40:04.527683  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 06:40:04.527717  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 06:40:04.527750  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 06:40:04.527779  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 06:40:04.527812  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 06:40:04.527845  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 06:40:04.527901  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 06:40:04.527942  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 06:40:04.528018  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:40:04.528084  480112 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:40:04.528110  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:40:04.528175  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:40:04.528223  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:40:04.528266  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:40:04.528351  480112 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:40:04.528416  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.528448  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem -> /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.528484  480112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.529122  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:40:04.549434  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:40:04.568942  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:40:04.588032  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:40:04.616779  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:40:04.636137  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:40:04.655504  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:40:04.673755  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:40:04.692822  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:40:04.711199  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:40:04.730794  480112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:40:04.748559  480112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:40:04.762229  480112 ssh_runner.go:195] Run: openssl version
	I1205 06:40:04.768327  480112 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1205 06:40:04.768697  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.776287  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:40:04.784133  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788189  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788221  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.788277  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:40:04.829541  480112 command_runner.go:130] > b5213941
	I1205 06:40:04.829985  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:40:04.837884  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.845797  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:40:04.853974  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.857841  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858230  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.858295  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:40:04.900152  480112 command_runner.go:130] > 51391683
	I1205 06:40:04.900696  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:40:04.908660  480112 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.916381  480112 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:40:04.924345  480112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928449  480112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928489  480112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.928538  480112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:40:04.969475  480112 command_runner.go:130] > 3ec20f2e
	I1205 06:40:04.969979  480112 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
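	The three openssl/ln cycles above follow the standard OpenSSL hashed-directory convention: each CA certificate is hashed with openssl x509 -hash and exposed under /etc/ssl/certs as "<hash>.0" so TLS libraries can look it up by subject hash. A minimal Go sketch of one such cycle, shelling out to openssl the same way the log does (installCA is a hypothetical helper name):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA computes the OpenSSL subject hash of a CA certificate,
	// then symlinks it into /etc/ssl/certs as "<hash>.0".
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // equivalent to ln -fs: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		// Path taken from the log above; adjust for your environment.
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}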
	I1205 06:40:04.977627  480112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981676  480112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:40:04.981703  480112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 06:40:04.981710  480112 command_runner.go:130] > Device: 259,1	Inode: 1046940     Links: 1
	I1205 06:40:04.981717  480112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 06:40:04.981724  480112 command_runner.go:130] > Access: 2025-12-05 06:35:56.052204819 +0000
	I1205 06:40:04.981729  480112 command_runner.go:130] > Modify: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981735  480112 command_runner.go:130] > Change: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981741  480112 command_runner.go:130] >  Birth: 2025-12-05 06:31:51.389194081 +0000
	I1205 06:40:04.981812  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:40:05.025511  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.026281  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:40:05.067472  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.067923  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:40:05.109199  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.110439  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:40:05.151291  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.151789  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:40:05.192630  480112 command_runner.go:130] > Certificate will not expire
	I1205 06:40:05.193112  480112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:40:05.234917  480112 command_runner.go:130] > Certificate will not expire
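	Each -checkend 86400 run above asks whether a certificate will still be valid 24 hours from now; "Certificate will not expire" means NotAfter lies beyond that horizon. The same check in Go with crypto/x509, a sketch using one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// openssl x509 -checkend 86400: does the cert outlive now+24h?
		if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}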
	I1205 06:40:05.235493  480112 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:40:05.235576  480112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:40:05.235658  480112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:40:05.274773  480112 cri.go:89] found id: ""
	I1205 06:40:05.274854  480112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:40:05.284543  480112 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 06:40:05.284569  480112 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 06:40:05.284576  480112 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 06:40:05.284587  480112 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:40:05.284593  480112 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:40:05.284641  480112 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:40:05.293745  480112 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:40:05.294169  480112 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-787602" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.294277  480112 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-441321/kubeconfig needs updating (will repair): [kubeconfig missing "functional-787602" cluster setting kubeconfig missing "functional-787602" context setting]
	I1205 06:40:05.294658  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.295082  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.295239  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.295723  480112 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 06:40:05.295760  480112 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 06:40:05.295766  480112 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 06:40:05.295771  480112 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 06:40:05.295779  480112 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 06:40:05.296148  480112 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:40:05.296228  480112 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1205 06:40:05.305058  480112 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1205 06:40:05.305103  480112 kubeadm.go:602] duration metric: took 20.504477ms to restartPrimaryControlPlane
	I1205 06:40:05.305113  480112 kubeadm.go:403] duration metric: took 69.632192ms to StartCluster
	I1205 06:40:05.305127  480112 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305185  480112 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.305773  480112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:40:05.305969  480112 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 06:40:05.306285  480112 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:40:05.306340  480112 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 06:40:05.306433  480112 addons.go:70] Setting storage-provisioner=true in profile "functional-787602"
	I1205 06:40:05.306448  480112 addons.go:239] Setting addon storage-provisioner=true in "functional-787602"
	I1205 06:40:05.306452  480112 addons.go:70] Setting default-storageclass=true in profile "functional-787602"
	I1205 06:40:05.306473  480112 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-787602"
	I1205 06:40:05.306480  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.306771  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.306997  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.310651  480112 out.go:179] * Verifying Kubernetes components...
	I1205 06:40:05.313979  480112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:40:05.339795  480112 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:40:05.340007  480112 kapi.go:59] client config for functional-787602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 06:40:05.340282  480112 addons.go:239] Setting addon default-storageclass=true in "functional-787602"
	I1205 06:40:05.340312  480112 host.go:66] Checking if "functional-787602" exists ...
	I1205 06:40:05.340728  480112 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:40:05.361959  480112 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 06:40:05.364893  480112 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.364921  480112 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 06:40:05.364987  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.384451  480112 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:05.384479  480112 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 06:40:05.384563  480112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:40:05.411372  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.432092  480112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:40:05.510112  480112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:40:05.550609  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:05.557147  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.275527  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275618  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275677  480112 retry.go:31] will retry after 247.926554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275753  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.275786  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.275814  480112 retry.go:31] will retry after 139.276641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
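	The retry.go:31 entries on either side of this point implement apply-with-backoff: each failed kubectl apply is retried after a randomized, growing delay until the apiserver answers again. A minimal Go sketch of that shape (the attempt count, base delay, and jitter are illustrative, not minikube's exact policy):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted, sleeping a
	// jittered, doubling delay between tries -- the same shape as the
	// "will retry after ..." lines in this log.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retry(5, 200*time.Millisecond, func() error {
			return errors.New("connection refused") // stand-in for the kubectl apply
		})
		fmt.Println("gave up:", err)
	}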
	I1205 06:40:06.275869  480112 node_ready.go:35] waiting up to 6m0s for node "functional-787602" to be "Ready" ...
	I1205 06:40:06.275986  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.276069  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.276382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.415646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.474935  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.474981  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.475001  480112 retry.go:31] will retry after 366.421161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.524197  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.584795  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.584843  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.584873  480112 retry.go:31] will retry after 312.76439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.776120  480112 type.go:168] "Request Body" body=""
	I1205 06:40:06.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:06.776655  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:06.841962  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:06.898526  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:06.904086  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.904127  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.904149  480112 retry.go:31] will retry after 740.273906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.959857  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:06.963461  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:06.963497  480112 retry.go:31] will retry after 759.965783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.276975  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.277072  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.277469  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.645230  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:07.705790  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.705833  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.705854  480112 retry.go:31] will retry after 642.466008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.724048  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:07.776045  480112 type.go:168] "Request Body" body=""
	I1205 06:40:07.776157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:07.776481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:07.791584  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:07.795338  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:07.795382  480112 retry.go:31] will retry after 614.279076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:08.276605  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
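	The node_ready loop above keeps issuing GET /api/v1/nodes/functional-787602 and tolerates connection-refused while the apiserver restarts, succeeding once the node reports the Ready condition. A rough client-go equivalent of that poll, a sketch in which the kubeconfig path is a placeholder:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the log uses the jenkins profile's file.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-787602", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // keep polling through connection refused
		}
		fmt.Println("timed out waiting for Ready")
	}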
	I1205 06:40:08.348828  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:08.405271  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.408500  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.408576  480112 retry.go:31] will retry after 1.343995427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.410740  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:08.473489  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:08.473541  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.473564  480112 retry.go:31] will retry after 1.078913702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:08.777094  480112 type.go:168] "Request Body" body=""
	I1205 06:40:08.777222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:08.777651  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.276356  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.276453  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.276780  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.553646  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:09.614016  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.614089  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.614116  480112 retry.go:31] will retry after 2.379780781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.753405  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:09.777031  480112 type.go:168] "Request Body" body=""
	I1205 06:40:09.777132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:09.777482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:09.813171  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:09.813239  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:09.813272  480112 retry.go:31] will retry after 1.978465808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
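
Each apply attempt is executed inside the node as a plain shell command, as the ssh_runner lines show. Outside the harness, the same invocation can be reproduced with os/exec — this sketch simply mirrors the logged command line (run it on the node itself); CombinedOutput captures the same stderr text shown in the stdout:/stderr: blocks:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror of the command minikube runs over SSH; paths copied from the log.
	// sudo accepts the leading VAR=value assignment, exactly as logged.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s\nerr: %v\n", out, err)
}
```
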
	I1205 06:40:10.276816  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.277257  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:10.277348  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
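
Between apply attempts, the harness polls GET /api/v1/nodes/functional-787602 and node_ready.go reports the "Ready" condition as unavailable because the connection is refused. A hedged sketch of that readiness check using client-go — the kubeconfig path and node name are taken from the log, but the function itself is illustrative, not minikube's node_ready.go:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches one node and reports whether its Ready condition is
// True. While the apiserver is down, the Get itself fails with
// "connection refused", which is what the node_ready.go warnings show.
func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(context.Background(), client, "functional-787602")
	fmt.Println(ready, err)
}
```
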
	I1205 06:40:10.776020  480112 type.go:168] "Request Body" body=""
	I1205 06:40:10.776102  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:10.776363  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.276081  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.276155  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:40:11.776221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:11.776585  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:11.791876  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:11.850961  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:11.851011  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.851047  480112 retry.go:31] will retry after 1.715194365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:11.994161  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:12.058032  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:12.058079  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.058098  480112 retry.go:31] will retry after 2.989540966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:12.276377  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.276451  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.276701  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:12.776111  480112 type.go:168] "Request Body" body=""
	I1205 06:40:12.776195  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:12.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:12.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:13.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.276532  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:13.567026  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:13.620219  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:13.623514  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.623554  480112 retry.go:31] will retry after 5.458226005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:13.776806  480112 type.go:168] "Request Body" body=""
	I1205 06:40:13.776876  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:13.777207  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.277043  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.277126  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.277411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:14.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:40:14.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:14.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:14.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:15.048089  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:15.111053  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:15.111091  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.111112  480112 retry.go:31] will retry after 5.631155228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:15.276375  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.276443  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:15.776648  480112 type.go:168] "Request Body" body=""
	I1205 06:40:15.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:15.777039  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.276857  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.276930  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:16.776968  480112 type.go:168] "Request Body" body=""
	I1205 06:40:16.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:16.777300  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:16.777347  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:17.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.276495  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:17.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:40:17.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:17.776528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.276064  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.276137  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.276439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:18.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:40:18.776212  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:18.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:19.082075  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:19.143244  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:19.143293  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.143314  480112 retry.go:31] will retry after 4.646546475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:19.276638  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.276712  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.277087  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:19.277141  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:19.776926  480112 type.go:168] "Request Body" body=""
	I1205 06:40:19.777007  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:19.777341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.276113  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.276187  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.276533  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.743196  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:20.776726  480112 type.go:168] "Request Body" body=""
	I1205 06:40:20.776805  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:20.777070  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:20.801108  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:20.801144  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:20.801162  480112 retry.go:31] will retry after 9.136671028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:21.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.276973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.277268  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:21.277311  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:21.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:40:21.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:21.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.276165  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.276249  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:22.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:40:22.776313  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:22.776619  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.276136  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:23.776172  480112 type.go:168] "Request Body" body=""
	I1205 06:40:23.776265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:23.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:23.776664  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:23.790980  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:23.852305  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:23.852351  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:23.852373  480112 retry.go:31] will retry after 4.852638111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:24.276878  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.276951  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.277225  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:24.776145  480112 type.go:168] "Request Body" body=""
	I1205 06:40:24.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:24.776514  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.276631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:25.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:40:25.776628  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:25.776885  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:25.776924  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:26.276685  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.276766  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.277101  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:26.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:40:26.777045  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:26.777350  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.277008  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.277082  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.277349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:27.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:40:27.776144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:27.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:28.276082  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.276512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:28.276571  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
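
The half-second cadence of these GET requests (timestamps alternating between .276 and .776) is a fixed-interval poll-until-timeout loop; in Kubernetes tooling that shape is usually expressed with the apimachinery wait helpers. A minimal illustration, assuming a hypothetical checkReady probe like the one sketched earlier:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 500ms for up to 6 minutes, tolerating transient errors by
	// returning (false, nil) so the loop keeps going, as the repeated
	// "(will retry)" warnings in the log suggest.
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			ready, err := checkReady(ctx) // hypothetical readiness probe
			if err != nil {
				fmt.Println("will retry:", err)
				return false, nil // swallow the error and poll again
			}
			return ready, nil
		})
	fmt.Println("poll finished:", err)
}

// checkReady is a stand-in for the node Ready lookup; not part of minikube.
func checkReady(ctx context.Context) (bool, error) {
	return false, fmt.Errorf("dial tcp 192.168.49.2:8441: connect: connection refused")
}
```
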
	I1205 06:40:28.705256  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:28.766465  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:28.766519  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.766541  480112 retry.go:31] will retry after 15.718503653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:28.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:40:28.776721  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:28.777014  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.276890  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.276967  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.277333  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.776501  480112 type.go:168] "Request Body" body=""
	I1205 06:40:29.776578  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:29.776920  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:29.938493  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:30.002212  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:30.002257  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.002277  480112 retry.go:31] will retry after 5.082732051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:30.276542  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.276613  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.276880  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:30.276935  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:30.776666  480112 type.go:168] "Request Body" body=""
	I1205 06:40:30.776745  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:30.777100  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.276768  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.276846  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.277194  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:31.776934  480112 type.go:168] "Request Body" body=""
	I1205 06:40:31.777009  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:31.777271  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.276395  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.276491  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.276813  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:32.776164  480112 type.go:168] "Request Body" body=""
	I1205 06:40:32.776245  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:32.776574  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:32.776649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:33.276072  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.276140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:33.776151  480112 type.go:168] "Request Body" body=""
	I1205 06:40:33.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:33.776580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.276280  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.276378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:34.776431  480112 type.go:168] "Request Body" body=""
	I1205 06:40:34.776497  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:34.776750  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:34.776788  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:35.085301  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:40:35.148531  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:35.152882  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.152918  480112 retry.go:31] will retry after 11.086200752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:35.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.276603  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:35.777106  480112 type.go:168] "Request Body" body=""
	I1205 06:40:35.777182  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:35.777443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:36.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:40:36.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:36.276482  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:36.776167  480112 type.go:168] "Request Body" body=""
	I1205 06:40:36.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:36.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:37.276190  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.276271  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.276583  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:37.276633  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:37.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:40:37.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:37.776452  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:38.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:40:38.276239  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:38.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:38.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:40:38.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:38.776563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:39.276021  480112 type.go:168] "Request Body" body=""
	I1205 06:40:39.276100  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:39.276361  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:39.776110  480112 type.go:168] "Request Body" body=""
	I1205 06:40:39.776193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:39.776520  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:39.776575  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:40.276137  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:40.776012  480112 type.go:168] "Request Body" body=""
	I1205 06:40:40.776078  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:40.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:41.276108  480112 type.go:168] "Request Body" body=""
	I1205 06:40:41.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:41.276540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:41.776119  480112 type.go:168] "Request Body" body=""
	I1205 06:40:41.776197  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:41.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:42.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.276583  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:40:42.276631  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:40:42.776277  480112 type.go:168] "Request Body" body=""
	I1205 06:40:42.776365  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:42.776691  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:43.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:40:43.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:43.276573  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:43.776091  480112 type.go:168] "Request Body" body=""
	I1205 06:40:43.776169  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:43.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:44.276128  480112 type.go:168] "Request Body" body=""
	I1205 06:40:44.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:40:44.276566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:40:44.485984  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:40:44.554072  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:44.557893  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:44.557927  480112 retry.go:31] will retry after 22.628614414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 1 identical node poll omitted (06:40:44.776), no response ...]
	W1205 06:40:44.776781  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
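	This request/warning rhythm repeats for the rest of the section: the node_ready check fetches the node every ~500 ms, and a warning is surfaced every few consecutive failed polls. A minimal sketch of such a loop using the standard client-go API (illustrative, not minikube's node_ready.go; the kubeconfig path is the one from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-787602", metav1.GetOptions{})
			if err != nil {
				// While the apiserver is down this is the "connection refused"
				// seen in the warnings above; just try again on the next tick.
				fmt.Println("will retry:", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}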
	[... 2 identical node polls omitted (06:40:45.276–06:40:45.776), no response ...]
	I1205 06:40:46.239320  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... 1 identical node poll omitted (06:40:46.276), no response ...]
	I1205 06:40:46.296820  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:40:46.296888  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:40:46.296909  480112 retry.go:31] will retry after 16.475007469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 2 identical node polls omitted (06:40:46.776–06:40:47.276), no response ...]
	W1205 06:40:47.276621  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:40:47.776–06:40:49.276), no response ...]
	W1205 06:40:49.276666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:40:49.776–06:40:51.776), no response ...]
	W1205 06:40:51.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:40:52.276–06:40:54.276), no response ...]
	W1205 06:40:54.276568  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:40:54.776–06:40:56.776), no response ...]
	W1205 06:40:56.776586  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:40:57.276–06:40:58.776), no response ...]
	W1205 06:40:58.776608  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:40:59.276–06:41:00.776), no response ...]
	W1205 06:41:00.776689  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 3 identical node polls omitted (06:41:01.276–06:41:02.276), no response ...]
	I1205 06:41:02.772181  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... 1 identical node poll omitted (06:41:02.776), no response ...]
	W1205 06:41:02.777132  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:02.828748  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:02.831873  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:02.831907  480112 retry.go:31] will retry after 23.767145255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 5 identical node polls omitted (06:41:03.276–06:41:05.277), no response ...]
	W1205 06:41:05.277261  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 3 identical node polls omitted (06:41:05.776–06:41:06.776), no response ...]
	I1205 06:41:07.187370  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:07.246801  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:07.246844  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 06:41:07.246863  480112 retry.go:31] will retry after 35.018877023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 1 identical node poll omitted (06:41:07.277), no response ...]
	W1205 06:41:07.277488  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:41:07.777–06:41:09.776), no response ...]
	W1205 06:41:09.776619  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:41:10.276–06:41:12.276), no response ...]
	W1205 06:41:12.276527  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:41:12.776–06:41:14.276), no response ...]
	W1205 06:41:14.276544  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:41:14.776–06:41:16.776), no response ...]
	W1205 06:41:16.776478  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:41:17.276–06:41:18.776), no response ...]
	W1205 06:41:18.776691  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:41:19.276–06:41:20.777), no response ...]
	W1205 06:41:20.777604  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:41:21.276–06:41:23.276), no response ...]
	W1205 06:41:23.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:41:23.776–06:41:25.276), no response ...]
	W1205 06:41:25.276662  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 2 identical node polls omitted (06:41:25.776–06:41:26.276), no response ...]
	I1205 06:41:26.599995  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 06:41:26.657664  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660860  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:26.660976  480112 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
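	Here the storage-provisioner addon finally gives up: every apply attempt failed because kubectl could not download the OpenAPI schema from the unreachable apiserver (the --validate=false hint in kubectl's own error message would skip that schema fetch, at the cost of client-side validation). One way such retries could be avoided, sketched under assumptions: gate the apply on an apiserver health probe against the standard /readyz endpoint (host and port taken from the log; anonymous access to /readyz may be disabled on some clusters, and TLS verification is skipped here only to keep the sketch short):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverReady reports whether the apiserver answers its /readyz probe.
	func apiserverReady(url string) bool {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Sketch only: a real client should verify the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false // e.g. "connect: connection refused", as in the log
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		fmt.Println(apiserverReady("https://192.168.49.2:8441/readyz"))
	}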
	[... 3 identical node polls omitted (06:41:26.776–06:41:27.776), no response ...]
	W1205 06:41:27.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:41:28.276–06:41:29.776), no response ...]
	W1205 06:41:29.777135  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 4 identical node polls omitted (06:41:30.276–06:41:31.776), no response ...]
	W1205 06:41:31.777332  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 5 identical node polls omitted (06:41:32.276–06:41:34.276), no response ...]
	W1205 06:41:34.276756  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:34.776367  480112 type.go:168] "Request Body" body=""
	I1205 06:41:34.776450  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:34.776713  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:35.276380  480112 type.go:168] "Request Body" body=""
	I1205 06:41:35.276460  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:35.276788  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:35.776775  480112 type.go:168] "Request Body" body=""
	I1205 06:41:35.776849  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:35.777195  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:36.276774  480112 type.go:168] "Request Body" body=""
	I1205 06:41:36.276844  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:36.277103  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:36.277142  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:36.776970  480112 type.go:168] "Request Body" body=""
	I1205 06:41:36.777059  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:36.777387  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:37.276088  480112 type.go:168] "Request Body" body=""
	I1205 06:41:37.276166  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:37.276497  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:37.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:41:37.776165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:37.776496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:38.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:41:38.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:38.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:38.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:41:38.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:38.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:38.776611  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:39.276263  480112 type.go:168] "Request Body" body=""
	I1205 06:41:39.276331  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:39.276598  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:39.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:41:39.776206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:39.776515  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:40.276217  480112 type.go:168] "Request Body" body=""
	I1205 06:41:40.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:40.276599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:40.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:41:40.776292  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:40.776600  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:41:40.776666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:41:41.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:41:41.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:41.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:41.776294  480112 type.go:168] "Request Body" body=""
	I1205 06:41:41.776370  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:41.776711  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.266330  480112 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1205 06:41:42.277622  480112 type.go:168] "Request Body" body=""
	I1205 06:41:42.277694  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:41:42.277960  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:41:42.360709  480112 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361696  480112 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1205 06:41:42.361795  480112 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1205 06:41:42.365007  480112 out.go:179] * Enabled addons: 
	I1205 06:41:42.368666  480112 addons.go:530] duration metric: took 1m37.062317768s for enable addons: enabled=[]
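
The addon failure above follows the same pattern as the readiness loop: minikube runs `kubectl apply` on the node, and because nothing is listening on port 8441, kubectl cannot download the OpenAPI schema for validation and exits with status 1, after which addons.go logs "apply failed, will retry". A minimal sketch of that retry-on-exit-failure pattern, assuming a hypothetical applyManifest helper and plain os/exec rather than minikube's SSH-based command runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out to kubectl and retries on failure, echoing the
// "apply failed, will retry" behavior in the log above. Hypothetical sketch;
// minikube's addons.go drives the same command over SSH inside the node.
func applyManifest(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		if out, err := cmd.CombinedOutput(); err != nil {
			// While the apiserver is down, kubectl fails validation with
			// "failed to download openapi: ... connection refused".
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			time.Sleep(time.Second)
			continue
		}
		return nil
	}
	return lastErr
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		3,
	)
	if err != nil {
		fmt.Println(err)
	}
}
```

As the kubectl error text itself notes, validation can be skipped with --validate=false, but that would only mask the real problem here: the apiserver at 192.168.49.2:8441 is refusing connections entirely.
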
	[... polling resumed with the same ~500ms GET https://192.168.49.2:8441/api/v1/nodes/functional-787602 cycles from 06:41:42.776 through 06:42:29.776, every attempt refused (dial tcp 192.168.49.2:8441: connect: connection refused); node_ready.go:55 repeated the retry warning roughly every 2s, and a single response (06:42:04.782) took 5ms instead of 0ms ...]
	I1205 06:42:30.276354  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.276518  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.276958  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:30.776754  480112 type.go:168] "Request Body" body=""
	I1205 06:42:30.776886  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:30.777216  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:30.777271  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:31.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.276997  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.277353  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:31.776905  480112 type.go:168] "Request Body" body=""
	I1205 06:42:31.776973  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:31.777239  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.276031  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.276129  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.276453  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:32.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:42:32.776236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:32.776566  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:33.276005  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.276073  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.276326  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:33.276364  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:33.776056  480112 type.go:168] "Request Body" body=""
	I1205 06:42:33.776130  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:33.776489  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.276252  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.276601  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:34.776105  480112 type.go:168] "Request Body" body=""
	I1205 06:42:34.776170  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:34.776439  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:35.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.276502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:35.276548  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:35.776412  480112 type.go:168] "Request Body" body=""
	I1205 06:42:35.776485  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:35.776805  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.276101  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.276193  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.276468  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:36.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:42:36.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:36.776512  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:37.276094  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.276180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:37.276578  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:37.776073  480112 type.go:168] "Request Body" body=""
	I1205 06:42:37.776140  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:37.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.276139  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.276594  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:38.776275  480112 type.go:168] "Request Body" body=""
	I1205 06:42:38.776354  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:38.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.276121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.276191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.276447  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:39.776154  480112 type.go:168] "Request Body" body=""
	I1205 06:42:39.776231  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:39.776555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:39.776610  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:40.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.276511  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:40.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:42:40.776168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:40.776483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.276160  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.276563  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:41.776311  480112 type.go:168] "Request Body" body=""
	I1205 06:42:41.776412  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:41.776748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:41.776800  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:42.276460  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.276533  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.276835  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:42.776147  480112 type.go:168] "Request Body" body=""
	I1205 06:42:42.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.276274  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.276358  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.276718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:43.776293  480112 type.go:168] "Request Body" body=""
	I1205 06:42:43.776371  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:43.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:44.276399  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.276475  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.276774  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:44.276818  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:44.776823  480112 type.go:168] "Request Body" body=""
	I1205 06:42:44.776896  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:44.777260  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.277015  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.277165  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.277467  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:45.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:42:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:45.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.276290  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.276372  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.276755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:46.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:42:46.776351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:46.776696  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:46.776865  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:47.276162  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.276246  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.276562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:47.776400  480112 type.go:168] "Request Body" body=""
	I1205 06:42:47.776503  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:47.777026  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.276644  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.276723  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.276978  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:48.776820  480112 type.go:168] "Request Body" body=""
	I1205 06:42:48.776899  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:48.777234  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:48.777287  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:49.277045  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.277135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.277475  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:49.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:42:49.776153  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:49.776484  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.276042  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.276116  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.276446  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:50.776052  480112 type.go:168] "Request Body" body=""
	I1205 06:42:50.776127  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:50.776478  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:51.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.276575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:51.276627  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:51.776127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:51.776200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:51.776530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.276118  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.276504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:52.776090  480112 type.go:168] "Request Body" body=""
	I1205 06:42:52.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:52.776470  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.276544  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:53.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:42:53.776227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:53.776595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:53.776655  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:54.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.276188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.276499  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:54.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:42:54.776278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:54.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.276151  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:55.776393  480112 type.go:168] "Request Body" body=""
	I1205 06:42:55.776463  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:55.776718  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:55.776760  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:56.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.276565  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:56.776279  480112 type.go:168] "Request Body" body=""
	I1205 06:42:56.776355  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:56.776683  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.276350  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.276419  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.276709  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:57.776121  480112 type.go:168] "Request Body" body=""
	I1205 06:42:57.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:57.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:58.276226  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.276304  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:42:58.276716  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:42:58.776027  480112 type.go:168] "Request Body" body=""
	I1205 06:42:58.776099  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:58.776349  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.276062  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.276501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:42:59.776817  480112 type.go:168] "Request Body" body=""
	I1205 06:42:59.776902  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:42:59.777233  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:00.277352  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.277456  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.277768  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:00.277814  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:00.776195  480112 type.go:168] "Request Body" body=""
	I1205 06:43:00.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:00.776654  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.276143  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.276560  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:01.776901  480112 type.go:168] "Request Body" body=""
	I1205 06:43:01.776971  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:01.777244  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.277063  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.277162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.277496  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:02.776133  480112 type.go:168] "Request Body" body=""
	I1205 06:43:02.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:02.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:02.776546  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:03.276099  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.276179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:03.776134  480112 type.go:168] "Request Body" body=""
	I1205 06:43:03.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:03.776535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.276221  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.276644  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:04.776562  480112 type.go:168] "Request Body" body=""
	I1205 06:43:04.776637  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:04.776900  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:04.776951  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:05.276709  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.276791  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:05.776977  480112 type.go:168] "Request Body" body=""
	I1205 06:43:05.777064  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:05.777431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.276168  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.276481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:06.776100  480112 type.go:168] "Request Body" body=""
	I1205 06:43:06.776208  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:06.776494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:07.276116  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.276221  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:07.276607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:07.776246  480112 type.go:168] "Request Body" body=""
	I1205 06:43:07.776316  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:07.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.276236  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.276554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:08.776269  480112 type.go:168] "Request Body" body=""
	I1205 06:43:08.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:08.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.276194  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.276266  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.276528  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:09.776304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:09.776378  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:09.776699  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:09.776757  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:10.276472  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.276560  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.276905  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:10.776647  480112 type.go:168] "Request Body" body=""
	I1205 06:43:10.776717  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:10.776986  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.276873  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.277209  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:11.777023  480112 type.go:168] "Request Body" body=""
	I1205 06:43:11.777098  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:11.777457  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:11.777510  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:12.276089  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.276172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.276429  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:12.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:12.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:12.776561  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.276276  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.276351  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.276678  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:13.776070  480112 type.go:168] "Request Body" body=""
	I1205 06:43:13.776139  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:13.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:14.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.276507  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:14.276551  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:14.776223  480112 type.go:168] "Request Body" body=""
	I1205 06:43:14.776298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:14.776621  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.276090  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.276171  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.276486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:15.776369  480112 type.go:168] "Request Body" body=""
	I1205 06:43:15.776445  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:15.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:16.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.276607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:16.276663  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:16.776324  480112 type.go:168] "Request Body" body=""
	I1205 06:43:16.776396  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:16.776758  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.276146  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.276224  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:17.776141  480112 type.go:168] "Request Body" body=""
	I1205 06:43:17.776230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:17.776575  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.276431  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:43:18.776142  480112 type.go:168] "Request Body" body=""
	I1205 06:43:18.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:18.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:43:18.776607  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:43:19.276304  480112 type.go:168] "Request Body" body=""
	I1205 06:43:19.276385  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:43:19.276748  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~118 near-identical poll cycles condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-787602 shown above was retried every ~500ms from 06:43:19.776 through 06:44:18.776, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 emitted the same "will retry" warning after roughly every fourth or fifth attempt ...]
	I1205 06:44:19.276181  480112 type.go:168] "Request Body" body=""
	I1205 06:44:19.276257  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:19.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:19.776049  480112 type.go:168] "Request Body" body=""
	I1205 06:44:19.776119  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:19.776371  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:20.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:44:20.276146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:20.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:20.776080  480112 type.go:168] "Request Body" body=""
	I1205 06:44:20.776160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:20.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:20.776581  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:21.276106  480112 type.go:168] "Request Body" body=""
	I1205 06:44:21.276174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:21.276487  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:21.776202  480112 type.go:168] "Request Body" body=""
	I1205 06:44:21.776283  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:21.776659  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:22.276365  480112 type.go:168] "Request Body" body=""
	I1205 06:44:22.276438  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:22.276776  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:22.776225  480112 type.go:168] "Request Body" body=""
	I1205 06:44:22.776394  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:22.776811  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:22.776918  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:23.276742  480112 type.go:168] "Request Body" body=""
	I1205 06:44:23.276818  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:23.277175  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:23.777069  480112 type.go:168] "Request Body" body=""
	I1205 06:44:23.777161  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:23.777559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:24.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:24.276175  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:24.276441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:24.776173  480112 type.go:168] "Request Body" body=""
	I1205 06:44:24.776247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:24.776571  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:25.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:25.276345  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:25.276694  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:25.276749  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:25.776415  480112 type.go:168] "Request Body" body=""
	I1205 06:44:25.776487  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:25.776789  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:26.276167  480112 type.go:168] "Request Body" body=""
	I1205 06:44:26.276248  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:26.276568  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:26.776129  480112 type.go:168] "Request Body" body=""
	I1205 06:44:26.776206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:26.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:27.276125  480112 type.go:168] "Request Body" body=""
	I1205 06:44:27.276213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:27.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:27.776128  480112 type.go:168] "Request Body" body=""
	I1205 06:44:27.776214  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:27.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:27.776601  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:28.276127  480112 type.go:168] "Request Body" body=""
	I1205 06:44:28.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:28.276543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:28.776178  480112 type.go:168] "Request Body" body=""
	I1205 06:44:28.776254  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:28.776579  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:29.276154  480112 type.go:168] "Request Body" body=""
	I1205 06:44:29.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:29.276542  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:29.776485  480112 type.go:168] "Request Body" body=""
	I1205 06:44:29.776594  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:29.776923  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:29.776981  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:30.276085  480112 type.go:168] "Request Body" body=""
	I1205 06:44:30.276157  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:30.276456  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:30.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:44:30.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:30.776542  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:31.276152  480112 type.go:168] "Request Body" body=""
	I1205 06:44:31.276240  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:31.276609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:31.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:44:31.776189  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:31.776509  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:32.276132  480112 type.go:168] "Request Body" body=""
	I1205 06:44:32.276215  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:32.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:32.276593  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:32.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:44:32.776253  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:32.776599  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:33.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:44:33.276216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:33.276485  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:33.776213  480112 type.go:168] "Request Body" body=""
	I1205 06:44:33.776298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:33.776635  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:34.276158  480112 type.go:168] "Request Body" body=""
	I1205 06:44:34.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:34.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:34.276654  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:34.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:34.776174  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:34.776549  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:35.276126  480112 type.go:168] "Request Body" body=""
	I1205 06:44:35.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:35.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:35.776308  480112 type.go:168] "Request Body" body=""
	I1205 06:44:35.776391  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:35.776737  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:36.276100  480112 type.go:168] "Request Body" body=""
	I1205 06:44:36.276170  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:36.276424  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:36.776094  480112 type.go:168] "Request Body" body=""
	I1205 06:44:36.776169  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:36.776558  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:36.776623  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:37.276145  480112 type.go:168] "Request Body" body=""
	I1205 06:44:37.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:37.276543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:37.776081  480112 type.go:168] "Request Body" body=""
	I1205 06:44:37.776159  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:37.776465  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:38.276148  480112 type.go:168] "Request Body" body=""
	I1205 06:44:38.276225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:38.276595  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:38.776175  480112 type.go:168] "Request Body" body=""
	I1205 06:44:38.776258  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:38.776609  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:38.776666  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:39.276095  480112 type.go:168] "Request Body" body=""
	I1205 06:44:39.276167  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:39.276460  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:39.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:44:39.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:39.776540  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:40.276186  480112 type.go:168] "Request Body" body=""
	I1205 06:44:40.276264  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:40.276597  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:40.776207  480112 type.go:168] "Request Body" body=""
	I1205 06:44:40.776284  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:40.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:41.276157  480112 type.go:168] "Request Body" body=""
	I1205 06:44:41.276235  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:41.276518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:41.276567  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:41.776214  480112 type.go:168] "Request Body" body=""
	I1205 06:44:41.776290  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:41.776631  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:42.276210  480112 type.go:168] "Request Body" body=""
	I1205 06:44:42.276285  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:42.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:42.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:44:42.776230  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:42.776543  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:43.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:43.276333  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:43.276661  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:43.276715  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:43.776156  480112 type.go:168] "Request Body" body=""
	I1205 06:44:43.776243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:43.776564  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:44.276236  480112 type.go:168] "Request Body" body=""
	I1205 06:44:44.276330  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:44.276658  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:44.776699  480112 type.go:168] "Request Body" body=""
	I1205 06:44:44.776770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:44.777048  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:45.276733  480112 type.go:168] "Request Body" body=""
	I1205 06:44:45.276817  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:45.277141  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:45.277194  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:45.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:44:45.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:45.776557  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:46.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:44:46.276336  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:46.276649  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:46.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:44:46.776179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:46.776440  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:47.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:44:47.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:47.276545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:47.776146  480112 type.go:168] "Request Body" body=""
	I1205 06:44:47.776314  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:47.776697  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:47.776761  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:48.276091  480112 type.go:168] "Request Body" body=""
	I1205 06:44:48.276160  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:48.276417  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:48.776114  480112 type.go:168] "Request Body" body=""
	I1205 06:44:48.776189  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:48.776518  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:49.276135  480112 type.go:168] "Request Body" body=""
	I1205 06:44:49.276211  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:49.276541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:49.776067  480112 type.go:168] "Request Body" body=""
	I1205 06:44:49.776138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:49.776462  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:50.276149  480112 type.go:168] "Request Body" body=""
	I1205 06:44:50.276226  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:50.276564  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:50.276626  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:50.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:44:50.776222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:50.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:51.276097  480112 type.go:168] "Request Body" body=""
	I1205 06:44:51.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:51.276510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:51.776099  480112 type.go:168] "Request Body" body=""
	I1205 06:44:51.776179  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:51.776465  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:52.276160  480112 type.go:168] "Request Body" body=""
	I1205 06:44:52.276237  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:52.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:52.776010  480112 type.go:168] "Request Body" body=""
	I1205 06:44:52.776087  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:52.776341  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:52.776389  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:53.276102  480112 type.go:168] "Request Body" body=""
	I1205 06:44:53.276192  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:53.276529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:53.776263  480112 type.go:168] "Request Body" body=""
	I1205 06:44:53.776335  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:53.776669  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:54.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:44:54.276298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:54.276617  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:54.776645  480112 type.go:168] "Request Body" body=""
	I1205 06:44:54.776724  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:54.777046  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:54.777107  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:55.276901  480112 type.go:168] "Request Body" body=""
	I1205 06:44:55.276974  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:55.277307  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:55.776239  480112 type.go:168] "Request Body" body=""
	I1205 06:44:55.776324  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:55.776673  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:56.276161  480112 type.go:168] "Request Body" body=""
	I1205 06:44:56.276245  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:56.276580  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:56.776271  480112 type.go:168] "Request Body" body=""
	I1205 06:44:56.776343  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:56.776700  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:57.276078  480112 type.go:168] "Request Body" body=""
	I1205 06:44:57.276162  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:57.276411  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:57.276450  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:57.776104  480112 type.go:168] "Request Body" body=""
	I1205 06:44:57.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:57.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:58.276231  480112 type.go:168] "Request Body" body=""
	I1205 06:44:58.276307  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:58.276629  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:58.776188  480112 type.go:168] "Request Body" body=""
	I1205 06:44:58.776260  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:58.776520  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:44:59.276166  480112 type.go:168] "Request Body" body=""
	I1205 06:44:59.276248  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:59.276552  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:44:59.276596  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:44:59.776249  480112 type.go:168] "Request Body" body=""
	I1205 06:44:59.776324  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:44:59.776665  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:00.276382  480112 type.go:168] "Request Body" body=""
	I1205 06:45:00.276469  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:00.276785  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:00.776579  480112 type.go:168] "Request Body" body=""
	I1205 06:45:00.776666  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:00.777193  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:01.276126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:01.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:01.276481  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:01.776101  480112 type.go:168] "Request Body" body=""
	I1205 06:45:01.776210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:01.776510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:01.776573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:02.276144  480112 type.go:168] "Request Body" body=""
	I1205 06:45:02.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:02.276570  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:02.776132  480112 type.go:168] "Request Body" body=""
	I1205 06:45:02.776222  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:02.776588  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:03.276207  480112 type.go:168] "Request Body" body=""
	I1205 06:45:03.276299  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:03.276642  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:03.776382  480112 type.go:168] "Request Body" body=""
	I1205 06:45:03.776473  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:03.776816  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:03.776873  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:04.276616  480112 type.go:168] "Request Body" body=""
	I1205 06:45:04.276687  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:04.276947  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:04.776885  480112 type.go:168] "Request Body" body=""
	I1205 06:45:04.776956  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:04.777296  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:05.276052  480112 type.go:168] "Request Body" body=""
	I1205 06:45:05.276135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:05.276493  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:05.776076  480112 type.go:168] "Request Body" body=""
	I1205 06:45:05.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:05.776382  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:06.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:06.276206  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:06.276505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:06.276549  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:06.776221  480112 type.go:168] "Request Body" body=""
	I1205 06:45:06.776318  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:06.776607  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:07.276260  480112 type.go:168] "Request Body" body=""
	I1205 06:45:07.276333  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:07.276647  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:07.776117  480112 type.go:168] "Request Body" body=""
	I1205 06:45:07.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:07.776505  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:08.276124  480112 type.go:168] "Request Body" body=""
	I1205 06:45:08.276205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:08.276525  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:08.276582  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:08.776068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:08.776135  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:08.776427  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:09.276142  480112 type.go:168] "Request Body" body=""
	I1205 06:45:09.276220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:09.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:09.776450  480112 type.go:168] "Request Body" body=""
	I1205 06:45:09.776528  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:09.776851  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:10.276606  480112 type.go:168] "Request Body" body=""
	I1205 06:45:10.276677  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:10.277000  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:10.277057  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:10.776657  480112 type.go:168] "Request Body" body=""
	I1205 06:45:10.776732  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:10.777046  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:11.276811  480112 type.go:168] "Request Body" body=""
	I1205 06:45:11.276882  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:11.277223  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:11.776851  480112 type.go:168] "Request Body" body=""
	I1205 06:45:11.776931  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:11.777196  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:12.276964  480112 type.go:168] "Request Body" body=""
	I1205 06:45:12.277038  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:12.277388  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:12.277445  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:12.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:12.776225  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:12.776553  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:13.276228  480112 type.go:168] "Request Body" body=""
	I1205 06:45:13.276298  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:13.276604  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:13.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:13.776188  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:13.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:14.276149  480112 type.go:168] "Request Body" body=""
	I1205 06:45:14.276227  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:14.276559  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:14.776072  480112 type.go:168] "Request Body" body=""
	I1205 06:45:14.776145  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:14.776458  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:14.776508  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:15.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:15.276200  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:15.276794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:15.776653  480112 type.go:168] "Request Body" body=""
	I1205 06:45:15.776744  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:15.777091  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:16.276715  480112 type.go:168] "Request Body" body=""
	I1205 06:45:16.276782  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:16.277064  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:16.776929  480112 type.go:168] "Request Body" body=""
	I1205 06:45:16.777011  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:16.777376  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:16.777433  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:17.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:45:17.276186  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:17.276483  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:17.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:17.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:17.776459  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:18.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:18.276247  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:18.276546  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:18.776124  480112 type.go:168] "Request Body" body=""
	I1205 06:45:18.776204  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:18.776526  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:19.276077  480112 type.go:168] "Request Body" body=""
	I1205 06:45:19.276149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:19.276409  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:19.276457  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:19.776140  480112 type.go:168] "Request Body" body=""
	I1205 06:45:19.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:19.776554  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:20.276265  480112 type.go:168] "Request Body" body=""
	I1205 06:45:20.276339  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:20.276676  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:20.776202  480112 type.go:168] "Request Body" body=""
	I1205 06:45:20.776280  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:20.776606  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:21.276132  480112 type.go:168] "Request Body" body=""
	I1205 06:45:21.276210  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:21.276582  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:21.276636  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:21.776310  480112 type.go:168] "Request Body" body=""
	I1205 06:45:21.776390  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:21.776682  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:22.276070  480112 type.go:168] "Request Body" body=""
	I1205 06:45:22.276144  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:22.276441  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:22.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:45:22.776202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:22.776541  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:23.276240  480112 type.go:168] "Request Body" body=""
	I1205 06:45:23.276321  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:23.276652  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:23.276714  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:23.776086  480112 type.go:168] "Request Body" body=""
	I1205 06:45:23.776172  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:23.776500  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:24.276140  480112 type.go:168] "Request Body" body=""
	I1205 06:45:24.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:24.276572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:24.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:24.776624  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:24.776995  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:25.276720  480112 type.go:168] "Request Body" body=""
	I1205 06:45:25.276795  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:25.277096  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:25.277138  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:25.777018  480112 type.go:168] "Request Body" body=""
	I1205 06:45:25.777094  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:25.777486  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:26.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:26.276209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:26.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:26.776075  480112 type.go:168] "Request Body" body=""
	I1205 06:45:26.776149  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:26.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:27.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:27.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:27.276551  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:27.776258  480112 type.go:168] "Request Body" body=""
	I1205 06:45:27.776335  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:27.776680  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:27.776737  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:28.276211  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.276623  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:28.776331  480112 type.go:168] "Request Body" body=""
	I1205 06:45:28.776414  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:28.776707  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.276416  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.276493  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.276818  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:29.776639  480112 type.go:168] "Request Body" body=""
	I1205 06:45:29.776714  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:29.776980  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:29.777029  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:30.276781  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.276856  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.277201  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:30.776871  480112 type.go:168] "Request Body" body=""
	I1205 06:45:30.776952  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:30.777288  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.277017  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.277091  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.277360  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:31.776747  480112 type.go:168] "Request Body" body=""
	I1205 06:45:31.776819  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:31.777132  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:31.777186  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:32.276950  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.277023  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.277345  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:32.776087  480112 type.go:168] "Request Body" body=""
	I1205 06:45:32.776177  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:32.776473  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.276153  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.276223  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.276576  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:33.776178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:33.776275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:33.776686  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:34.276385  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.276462  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.276731  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:34.276780  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:34.776523  480112 type.go:168] "Request Body" body=""
	I1205 06:45:34.776596  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:34.776911  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.276784  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.276862  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.277181  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:35.776969  480112 type.go:168] "Request Body" body=""
	I1205 06:45:35.777037  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:35.777301  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:36.277066  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.277146  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.277501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:36.277569  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:36.776101  480112 type.go:168] "Request Body" body=""
	I1205 06:45:36.776185  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:36.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.276084  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.276163  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.276433  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:37.776112  480112 type.go:168] "Request Body" body=""
	I1205 06:45:37.776191  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:37.776531  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.276122  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.276202  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.276516  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:38.776095  480112 type.go:168] "Request Body" body=""
	I1205 06:45:38.776164  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:38.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:38.776483  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:39.276129  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.276555  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:39.776407  480112 type.go:168] "Request Body" body=""
	I1205 06:45:39.776488  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:39.776826  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.276588  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.276663  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.276937  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:40.776794  480112 type.go:168] "Request Body" body=""
	I1205 06:45:40.776875  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:40.777212  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:40.777264  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:41.277035  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.277114  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.277443  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:41.776107  480112 type.go:168] "Request Body" body=""
	I1205 06:45:41.776176  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:41.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.276209  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.276287  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.276666  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:42.776161  480112 type.go:168] "Request Body" body=""
	I1205 06:45:42.776233  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:42.776562  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:43.276203  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.276275  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.276590  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:43.276647  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:43.776159  480112 type.go:168] "Request Body" body=""
	I1205 06:45:43.776232  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:43.776550  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.276530  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:44.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:45:44.776148  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:44.776400  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:45.276220  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.276317  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.276708  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:45.276763  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:45.776444  480112 type.go:168] "Request Body" body=""
	I1205 06:45:45.776519  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:45.776847  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.276602  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.276676  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.276921  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:46.776673  480112 type.go:168] "Request Body" body=""
	I1205 06:45:46.776790  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:46.777114  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:47.276802  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.276889  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.277247  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:47.277302  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:47.776975  480112 type.go:168] "Request Body" body=""
	I1205 06:45:47.777051  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:47.777338  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.276041  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.276118  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.276410  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:48.776030  480112 type.go:168] "Request Body" body=""
	I1205 06:45:48.776109  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:48.776395  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.276033  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.276104  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.276393  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:49.776143  480112 type.go:168] "Request Body" body=""
	I1205 06:45:49.776220  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:49.776539  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:49.776593  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:50.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.276494  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:50.776135  480112 type.go:168] "Request Body" body=""
	I1205 06:45:50.776205  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:50.776461  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.276134  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.276207  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.276535  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:51.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:45:51.776209  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:51.776547  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:52.276178  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.276243  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.276510  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:52.276549  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:52.776182  480112 type.go:168] "Request Body" body=""
	I1205 06:45:52.776256  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:52.776572  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.276130  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.276203  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.276538  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:53.776131  480112 type.go:168] "Request Body" body=""
	I1205 06:45:53.776199  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:53.776498  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:54.276193  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.276278  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.276592  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:54.276649  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:54.776395  480112 type.go:168] "Request Body" body=""
	I1205 06:45:54.776470  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:54.776794  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.276068  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.276132  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.276389  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:55.776137  480112 type.go:168] "Request Body" body=""
	I1205 06:45:55.776213  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:55.776545  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:56.276234  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.276311  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.276656  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:56.276710  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:56.776199  480112 type.go:168] "Request Body" body=""
	I1205 06:45:56.776281  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:56.776602  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.276123  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.276201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.276534  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:57.776290  480112 type.go:168] "Request Body" body=""
	I1205 06:45:57.776381  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:57.776755  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.276054  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.276133  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.276434  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:58.776103  480112 type.go:168] "Request Body" body=""
	I1205 06:45:58.776180  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:58.776504  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:45:58.776554  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:45:59.276223  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.276295  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.276593  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:45:59.776062  480112 type.go:168] "Request Body" body=""
	I1205 06:45:59.776141  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:45:59.776662  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.276689  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.276784  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.277182  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:00.776974  480112 type.go:168] "Request Body" body=""
	I1205 06:46:00.777053  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:00.777397  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:00.777455  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:01.276111  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.276181  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.276450  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:01.776126  480112 type.go:168] "Request Body" body=""
	I1205 06:46:01.776201  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:01.776502  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.276247  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.276322  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.276641  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:02.776078  480112 type.go:168] "Request Body" body=""
	I1205 06:46:02.776151  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:02.776436  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:03.276061  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.276138  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.276524  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:03.276573  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:03.776138  480112 type.go:168] "Request Body" body=""
	I1205 06:46:03.776216  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:03.776529  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.276173  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.276265  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.276523  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:04.776433  480112 type.go:168] "Request Body" body=""
	I1205 06:46:04.776505  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:04.776849  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:05.276666  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.276770  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.277090  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1205 06:46:05.277147  480112 node_ready.go:55] error getting node "functional-787602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-787602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1205 06:46:05.776139  480112 type.go:168] "Request Body" body=""
	I1205 06:46:05.776219  480112 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-787602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1205 06:46:05.776501  480112 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1205 06:46:06.276074  480112 node_ready.go:38] duration metric: took 6m0.000169865s for node "functional-787602" to be "Ready" ...
	I1205 06:46:06.279558  480112 out.go:203] 
	W1205 06:46:06.282535  480112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1205 06:46:06.282557  480112 out.go:285] * 
	W1205 06:46:06.284719  480112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:46:06.287525  480112 out.go:203] 
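	
	The ~500 ms cadence above is minikube's node-readiness poll: each iteration issues a GET on the node object, treats transient "connection refused" errors as retryable warnings, and gives up once the 6-minute deadline expires, which is what produces the GUEST_START exit. Below is a minimal Go sketch of that pattern; it assumes client-go and apimachinery's wait helpers and is illustrative only, not minikube's actual node_ready.go implementation.
	
	// Hypothetical sketch of the readiness poll visible in the log above:
	// GET the node every 500ms until a 6-minute deadline, retrying through
	// transient apiserver errors. Not minikube's real code.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		// 500ms interval / 6m timeout match the cadence and the
		// "wait 6m0s for node" deadline seen in the log.
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient failures (e.g. connection refused while the
					// apiserver restarts) are logged and retried, never fatal.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "functional-787602"); err != nil {
			// This is the branch the test hit: the deadline expired and the
			// caller exited with "context deadline exceeded".
			fmt.Println("node never became Ready:", err)
		}
	}
	
	When the condition never returns true, PollUntilContextTimeout surfaces the context's deadline error, which is the "WaitNodeCondition: context deadline exceeded" wrapped into the GUEST_START message above.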
	
	
	==> CRI-O <==
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.581412173Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9aa7b449-1ae2-4384-87e4-65b9a24fbea7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.605978089Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b3194c87-82bc-4598-80c5-431dd94b79dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.606137656Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b3194c87-82bc-4598-80c5-431dd94b79dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:14 functional-787602 crio[6033]: time="2025-12-05T06:46:14.606189743Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b3194c87-82bc-4598-80c5-431dd94b79dc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.704559867Z" level=info msg="Checking image status: minikube-local-cache-test:functional-787602" id=22be8394-443f-4475-bd1a-0099e58de926 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.730063797Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-787602" id=5e4e0158-f569-409e-beb1-58fd4e4941c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.730220697Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-787602 not found" id=5e4e0158-f569-409e-beb1-58fd4e4941c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.730273711Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-787602 found" id=5e4e0158-f569-409e-beb1-58fd4e4941c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.756848835Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-787602" id=075858db-0799-409d-b815-fb45ccb8b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.757020021Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-787602 not found" id=075858db-0799-409d-b815-fb45ccb8b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:15 functional-787602 crio[6033]: time="2025-12-05T06:46:15.75707301Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-787602 found" id=075858db-0799-409d-b815-fb45ccb8b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.562814589Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f3477146-fa11-4450-a819-6bd0968712a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.889931896Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=076f8c25-05eb-4184-9765-300a3483fdb8 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.89009445Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=076f8c25-05eb-4184-9765-300a3483fdb8 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:16 functional-787602 crio[6033]: time="2025-12-05T06:46:16.890143222Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=076f8c25-05eb-4184-9765-300a3483fdb8 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.43805252Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=29e1462a-2dc0-4511-b385-36e4e9ef2888 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.43817646Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=29e1462a-2dc0-4511-b385-36e4e9ef2888 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.438211234Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=29e1462a-2dc0-4511-b385-36e4e9ef2888 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.484857632Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1c72a6ec-6385-4908-883a-a6e1d899a39a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.484980916Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1c72a6ec-6385-4908-883a-a6e1d899a39a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.485024699Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=1c72a6ec-6385-4908-883a-a6e1d899a39a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.513359165Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7509d3f5-17cd-4bf2-a0d3-20bc09ced34c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.513489103Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7509d3f5-17cd-4bf2-a0d3-20bc09ced34c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:17 functional-787602 crio[6033]: time="2025-12-05T06:46:17.513523286Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=7509d3f5-17cd-4bf2-a0d3-20bc09ced34c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:46:18 functional-787602 crio[6033]: time="2025-12-05T06:46:18.056741075Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=91ed2cc6-448f-4a39-a38a-99accbdaa389 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:46:21.929753   10175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:21.931244   10175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:21.931590   10175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:21.933069   10175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:46:21.933383   10175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
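	The describe-nodes failure above is the same symptom seen from inside the node: nothing answers on 8441 because no control-plane containers are running (note the empty container-status table above). A quick confirmation, assuming SSH access to the node via minikube (sketch, not harness output):
	
	  # is anything listening on the apiserver port inside the node?
	  minikube -p functional-787602 ssh "sudo ss -ltnp | grep 8441"
	  # list all containers CRI-O knows about, including exited ones
	  minikube -p functional-787602 ssh sudo crictl ps -a
	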
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:46:21 up  3:28,  0 user,  load average: 0.53, 0.27, 0.49
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:46:19 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:20 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1149.
	Dec 05 06:46:20 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:20 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:20 functional-787602 kubelet[10051]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:20 functional-787602 kubelet[10051]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:20 functional-787602 kubelet[10051]: E1205 06:46:20.084897   10051 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:20 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:20 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:20 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1150.
	Dec 05 06:46:20 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:20 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:20 functional-787602 kubelet[10072]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:20 functional-787602 kubelet[10072]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:20 functional-787602 kubelet[10072]: E1205 06:46:20.831082   10072 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:20 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:20 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:46:21 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1151.
	Dec 05 06:46:21 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:21 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:46:21 functional-787602 kubelet[10094]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:21 functional-787602 kubelet[10094]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:46:21 functional-787602 kubelet[10094]: E1205 06:46:21.579547   10094 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:46:21 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:46:21 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
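The kubelet journal above carries the root cause for this run: kubelet v1.35 refuses to validate its configuration on a cgroup v1 host, so systemd restarts it in a loop (restart counter at 1151 and climbing). A generic probe for the host's cgroup mode, assuming shell access (the `stat` check is not taken from this report):

  # "cgroup2fs" => unified cgroup v2; "tmpfs" => legacy cgroup v1, as on this host
  stat -fc %T /sys/fs/cgroup

The kubeadm preflight warning later in this report names the escape hatch: a KubeletConfiguration that sets FailCgroupV1 to false, which this run does not apply.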
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (347.961496ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.41s)
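With the apiserver stopped, the kubectl pass-through this test drives can only fail. For reference, the wrapper invocation it exercises (mirroring the kubectl entry in the audit log later in this report) and the status probe the harness used above (sketch, assuming the same profile):

  # kubectl routed through the minikube binary
  out/minikube-linux-arm64 -p functional-787602 kubectl -- --context functional-787602 get pods
  # harness status probe (returns exit status 2 while the apiserver is down)
  out/minikube-linux-arm64 status -p functional-787602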

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:48:45.394515  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:50:39.247081  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:51:48.464493  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:52:02.316958  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:53:45.394586  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:55:39.250236  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.162775634s)

-- stdout --
	* [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128632s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
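The suggestion in the stderr tail is minikube's stock cgroup-driver hint; on this host the actual blocker is cgroup v1 itself, so the hinted flag alone is unlikely to help. For completeness, the hinted restart and kubeadm's recommended journal check (sketch, assuming the same profile):

  # minikube's own suggestion from the stderr above
  out/minikube-linux-arm64 start -p functional-787602 --extra-config=kubelet.cgroup-driver=systemd
  # kubeadm's recommended triage, run inside the node
  out/minikube-linux-arm64 -p functional-787602 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 20"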
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-787602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.164019235s for "functional-787602" cluster.
I1205 06:58:37.167669  444147 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
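The inspect output confirms the apiserver container port 8441/tcp is published on 127.0.0.1:33151. Two shorter ways to read that mapping than scanning the full JSON (generic docker invocations, not harness output):

  # human-readable port mapping
  docker port functional-787602 8441/tcp
  # or pull just the host port via a Go template
  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-787602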
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (314.287254ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-252233 ssh pgrep buildkitd                                                                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image   │ functional-252233 image ls --format yaml --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                            │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format json --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format table --alsologtostderr                                                                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls                                                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete  │ -p functional-252233                                                                                                                              │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start   │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ start   │ -p functional-787602 --alsologtostderr -v=8                                                                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:39 UTC │                     │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:latest                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add minikube-local-cache-test:functional-787602                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache delete minikube-local-cache-test:functional-787602                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl images                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ cache   │ functional-787602 cache reload                                                                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ kubectl │ functional-787602 kubectl -- --context functional-787602 get pods                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ start   │ -p functional-787602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:46:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:46:23.060483  485986 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:46:23.060587  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060592  485986 out.go:374] Setting ErrFile to fd 2...
	I1205 06:46:23.060596  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060943  485986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:46:23.061383  485986 out.go:368] Setting JSON to false
	I1205 06:46:23.062251  485986 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12510,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:46:23.062334  485986 start.go:143] virtualization:  
	I1205 06:46:23.066082  485986 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:46:23.069981  485986 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:46:23.070104  485986 notify.go:221] Checking for updates...
	I1205 06:46:23.076003  485986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:46:23.078837  485986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:46:23.081722  485986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:46:23.084680  485986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:46:23.087568  485986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:46:23.090922  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:23.091022  485986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:46:23.121487  485986 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:46:23.121590  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.189036  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.180099644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.189132  485986 docker.go:319] overlay module found
	I1205 06:46:23.192176  485986 out.go:179] * Using the docker driver based on existing profile
	I1205 06:46:23.195026  485986 start.go:309] selected driver: docker
	I1205 06:46:23.195034  485986 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.195143  485986 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:46:23.195245  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.259735  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.25087077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.260168  485986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:46:23.260193  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:23.260245  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:23.260292  485986 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
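
Compared with the profile echoed during driver validation above (ExtraOptions:[]), the saved cluster config now carries ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}], i.e. the admission-plugin override this ExtraConfig run exercises. It corresponds to a start flag of the following form (illustrative; minikube's generic --extra-config=component.key=value syntax):

    # Per-component override that produces the ExtraOptions entry above.
    minikube start -p functional-787602 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
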
	I1205 06:46:23.263405  485986 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:46:23.266278  485986 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:46:23.269305  485986 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:46:23.272128  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:23.272198  485986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:46:23.291679  485986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:46:23.291691  485986 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:46:23.331907  485986 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:46:24.681828  485986 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
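
Both preload mirrors return 404 for the v1.35.0-beta.0 cri-o arm64 tarball, so minikube falls back to its per-image cache in the cache.go lines that follow. The miss can be reproduced from any shell (illustrative check, assuming curl is available):

    # Expect an HTTP 404 status line for the missing preload tarball.
    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 | head -n 1
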
	I1205 06:46:24.681963  485986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:46:24.682057  485986 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682183  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:46:24.682191  485986 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.111µs
	I1205 06:46:24.682203  485986 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:46:24.682212  485986 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682238  485986 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:46:24.682242  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:46:24.682246  485986 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.143µs
	I1205 06:46:24.682251  485986 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682266  485986 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682260  485986 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682305  485986 start.go:364] duration metric: took 27.331µs to acquireMachinesLock for "functional-787602"
	I1205 06:46:24.682303  485986 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682317  485986 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:46:24.682322  485986 fix.go:54] fixHost starting: 
	I1205 06:46:24.682343  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:46:24.682348  485986 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.295µs
	I1205 06:46:24.682354  485986 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:46:24.682364  485986 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682416  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:46:24.682421  485986 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.691µs
	I1205 06:46:24.682428  485986 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682437  485986 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682462  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:46:24.682466  485986 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.31µs
	I1205 06:46:24.682471  485986 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682480  485986 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682514  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:46:24.682518  485986 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.27µs
	I1205 06:46:24.682523  485986 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:46:24.682531  485986 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682555  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:46:24.682558  485986 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.283µs
	I1205 06:46:24.682568  485986 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:46:24.682583  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:46:24.682587  485986 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 328.529µs
	I1205 06:46:24.682591  485986 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682599  485986 cache.go:87] Successfully saved all images to host disk.
	I1205 06:46:24.682614  485986 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:46:24.699421  485986 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:46:24.699440  485986 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:46:24.704636  485986 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:46:24.704669  485986 machine.go:94] provisionDockerMachine start ...
	I1205 06:46:24.704752  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.722297  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.722651  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.722658  485986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:46:24.869775  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:24.869801  485986 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:46:24.869864  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.887234  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.887558  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.887567  485986 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:46:25.047727  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:25.047810  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.066336  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.066675  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.066689  485986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:46:25.218719  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
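
The shell snippet above is idempotent /etc/hosts maintenance: if no line already ends with the hostname functional-787602, it either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. A quick check on the node (illustrative):

    # Expected after provisioning: 127.0.1.1 functional-787602
    grep '^127.0.1.1' /etc/hosts
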
	I1205 06:46:25.218735  485986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:46:25.218754  485986 ubuntu.go:190] setting up certificates
	I1205 06:46:25.218762  485986 provision.go:84] configureAuth start
	I1205 06:46:25.218833  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:25.236317  485986 provision.go:143] copyHostCerts
	I1205 06:46:25.236383  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:46:25.236396  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:46:25.236468  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:46:25.236562  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:46:25.236565  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:46:25.236589  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:46:25.236636  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:46:25.236640  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:46:25.236661  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:46:25.236704  485986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:46:25.509369  485986 provision.go:177] copyRemoteCerts
	I1205 06:46:25.509433  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:46:25.509483  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.526532  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:25.630074  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:46:25.647569  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:46:25.665563  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:46:25.683160  485986 provision.go:87] duration metric: took 464.374115ms to configureAuth
	I1205 06:46:25.683179  485986 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:46:25.683380  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:25.683487  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.701466  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.701775  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.701787  485986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:46:26.045147  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:46:26.045161  485986 machine.go:97] duration metric: took 1.340485738s to provisionDockerMachine
	I1205 06:46:26.045171  485986 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:46:26.045182  485986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:46:26.045240  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:46:26.045301  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.071462  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.178226  485986 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:46:26.181599  485986 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:46:26.181617  485986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:46:26.181627  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:46:26.181684  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:46:26.181759  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:46:26.181833  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:46:26.181875  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:46:26.189500  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:26.206597  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:46:26.223486  485986 start.go:296] duration metric: took 178.3022ms for postStartSetup
	I1205 06:46:26.223577  485986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:46:26.223614  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.239842  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.339498  485986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:46:26.344313  485986 fix.go:56] duration metric: took 1.66198384s for fixHost
	I1205 06:46:26.344329  485986 start.go:83] releasing machines lock for "functional-787602", held for 1.662017843s
	I1205 06:46:26.344396  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:26.361695  485986 ssh_runner.go:195] Run: cat /version.json
	I1205 06:46:26.361744  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.361773  485986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:46:26.361823  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.380556  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.389997  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.566296  485986 ssh_runner.go:195] Run: systemctl --version
	I1205 06:46:26.572676  485986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:46:26.609041  485986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:46:26.613450  485986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:46:26.613514  485986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:46:26.621451  485986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:46:26.621466  485986 start.go:496] detecting cgroup driver to use...
	I1205 06:46:26.621496  485986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:46:26.621543  485986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:46:26.637300  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:46:26.650753  485986 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:46:26.650821  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:46:26.666902  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:46:26.680209  485986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:46:26.795240  485986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:46:26.925661  485986 docker.go:234] disabling docker service ...
	I1205 06:46:26.925721  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:46:26.941529  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:46:26.954708  485986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:46:27.063545  485986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:46:27.175808  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:46:27.188517  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:46:27.203590  485986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:46:27.203644  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.212003  485986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:46:27.212066  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.220691  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.229907  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.238922  485986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:46:27.247339  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.256340  485986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.264720  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
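
The sed/grep series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, sets the cgroup manager to cgroupfs, re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. One way to confirm the result on the node (illustrative; key names taken from the commands above):

    # Show the fields the sed series is expected to have set.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, assuming the edits applied cleanly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",
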
	I1205 06:46:27.273692  485986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:46:27.281324  485986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:46:27.288509  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.394627  485986 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:46:27.581943  485986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:46:27.582023  485986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:46:27.586836  485986 start.go:564] Will wait 60s for crictl version
	I1205 06:46:27.586892  485986 ssh_runner.go:195] Run: which crictl
	I1205 06:46:27.591027  485986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:46:27.618052  485986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:46:27.618154  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.654922  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.689535  485986 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:46:27.692450  485986 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:46:27.709456  485986 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:46:27.716890  485986 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:46:27.719774  485986 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:46:27.719904  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:27.719957  485986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:46:27.756745  485986 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:46:27.756757  485986 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:46:27.756762  485986 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:46:27.756860  485986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
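
The empty ExecStart= line in the unit above is the standard systemd drop-in idiom: it clears the packaged unit's command so the minikube-specific kubelet invocation that follows becomes the only one. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the merged unit can be inspected on the node (illustrative):

    # Shows the base kubelet.service plus the 10-kubeadm.conf drop-in.
    sudo systemctl cat kubelet
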
	I1205 06:46:27.756933  485986 ssh_runner.go:195] Run: crio config
	I1205 06:46:27.826615  485986 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:46:27.826635  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:27.826644  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:27.826657  485986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:46:27.826679  485986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:46:27.826795  485986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
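
The generated kubeadm.yaml bundles four objects: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration; note that the disk eviction thresholds are zeroed out ("disable disk resource management by default") so image GC and evictions stay out of the way in CI. Recent kubeadm releases can sanity-check such a file (illustrative; reuses the binary path from the surrounding commands):

    # Validate the generated config with the same kubeadm binary minikube uses.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
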
	
	I1205 06:46:27.826871  485986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:46:27.834649  485986 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:46:27.834712  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:46:27.842099  485986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:46:27.855421  485986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:46:27.868701  485986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1205 06:46:27.882058  485986 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:46:27.885936  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.995572  485986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:46:28.275034  485986 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:46:28.275045  485986 certs.go:195] generating shared ca certs ...
	I1205 06:46:28.275061  485986 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:46:28.275249  485986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:46:28.275292  485986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:46:28.275298  485986 certs.go:257] generating profile certs ...
	I1205 06:46:28.275410  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:46:28.275475  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:46:28.275515  485986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:46:28.275644  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:46:28.275677  485986 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:46:28.275685  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:46:28.275720  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:46:28.275747  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:46:28.275784  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:46:28.275832  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:28.276503  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:46:28.298544  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:46:28.319289  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:46:28.339576  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:46:28.358300  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:46:28.376540  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:46:28.394872  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:46:28.412281  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:46:28.429993  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:46:28.447492  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:46:28.464800  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:46:28.482269  485986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:46:28.494984  485986 ssh_runner.go:195] Run: openssl version
	I1205 06:46:28.501339  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.508762  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:46:28.516382  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520092  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520163  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.563665  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:46:28.571080  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.578338  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:46:28.585799  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589656  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589716  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.631223  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:46:28.638732  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.646106  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:46:28.653539  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657103  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657161  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.698123  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
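
Each openssl x509 -hash call above computes the OpenSSL subject hash that names the /etc/ssl/certs symlink tested right after it (51391683.0, 3ec20f2e.0, b5213941.0). The naming scheme can be reproduced by hand (illustrative):

    # Derive the symlink name OpenSSL-style cert directories use.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "/etc/ssl/certs/${h}.0"   # expected: /etc/ssl/certs/b5213941.0
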
	I1205 06:46:28.706515  485986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:46:28.710605  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:46:28.754183  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:46:28.798105  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:46:28.841637  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:46:28.883652  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:46:28.926486  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:46:28.968827  485986 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:28.968900  485986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:46:28.968973  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:28.995506  485986 cri.go:89] found id: ""
	I1205 06:46:28.995567  485986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:46:29.004262  485986 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:46:29.004281  485986 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:46:29.004345  485986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:46:29.012409  485986 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.012971  485986 kubeconfig.go:125] found "functional-787602" server: "https://192.168.49.2:8441"
	I1205 06:46:29.014556  485986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:46:29.022548  485986 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:31:50.409182079 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:46:27.876278809 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1205 06:46:29.022570  485986 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:46:29.022584  485986 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 06:46:29.022652  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:29.056958  485986 cri.go:89] found id: ""
	I1205 06:46:29.057019  485986 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:46:29.073934  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:46:29.081656  485986 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5623 Dec  5 06:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:35 /etc/kubernetes/scheduler.conf
	
	I1205 06:46:29.081722  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:46:29.089572  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:46:29.097486  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.097543  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:46:29.105088  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.112583  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.112639  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.120188  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:46:29.127909  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.127966  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:46:29.135508  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:46:29.143544  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:29.190973  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.485506  485986 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.294504309s)
	I1205 06:46:30.485577  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.689694  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.752398  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
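After copying the new config into place, the log shows minikube re-running individual `kubeadm init phase` subcommands rather than a full `kubeadm init`. A condensed sketch of that sequence (assumed shape; the phase names and config path are taken from the log lines above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // Phase order copied from the log: certs, kubeconfig, kubelet-start,
        // control-plane, then local etcd.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("all kubeadm init phases completed")
    }

All five phases return promptly here; the failure only becomes visible afterwards, when the apiserver process never appears.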
	I1205 06:46:30.798299  485986 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:46:30.798367  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeated every ~0.5s, 119 further attempts from 06:46:31.299 through 06:47:30.299, none of them finding a kube-apiserver process ...]
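The probe being retried above is just pgrep on a ~500ms cadence: pgrep exits 0 only when a matching process exists, so the loop spins until the apiserver shows up or a deadline expires. A rough sketch of that wait loop (assumed, not minikube's implementation; the pgrep pattern is copied from the log):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // pgrep exits 0 only if a matching process is running.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        if err := waitForAPIServer(ctx); err != nil {
            fmt.Println(err)
        }
    }

In this run the minute elapses without a match, so minikube falls back to gathering diagnostic logs, which is what the following lines show.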
	I1205 06:47:30.799188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:30.799265  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:30.824550  485986 cri.go:89] found id: ""
	I1205 06:47:30.824564  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.824571  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:30.824577  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:30.824640  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:30.851389  485986 cri.go:89] found id: ""
	I1205 06:47:30.851404  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.851412  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:30.851416  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:30.851473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:30.877392  485986 cri.go:89] found id: ""
	I1205 06:47:30.877406  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.877421  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:30.877425  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:30.877481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:30.902294  485986 cri.go:89] found id: ""
	I1205 06:47:30.902308  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.902315  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:30.902321  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:30.902431  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:30.938796  485986 cri.go:89] found id: ""
	I1205 06:47:30.938810  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.938818  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:30.938823  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:30.938888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:30.965100  485986 cri.go:89] found id: ""
	I1205 06:47:30.965114  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.965121  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:30.965127  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:30.965183  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:30.992646  485986 cri.go:89] found id: ""
	I1205 06:47:30.992661  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.992668  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:30.992676  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:30.992686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:31.063641  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:31.063661  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:31.081045  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:31.081060  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:31.156684  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
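Every describe-nodes attempt fails with `connect: connection refused`, meaning nothing is listening on port 8441 at all, as opposed to a running apiserver rejecting TLS or authentication. A quick hypothetical check that separates those two cases:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused TCP dial means no listener; a successful dial means the
        // apiserver is up and the problem is further along (TLS, auth, RBAC).
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441; inspect TLS/auth next")
    }

Consistent with that, the crictl listings above find no kube-apiserver (or any other control-plane) container, so the static pods written by the kubeadm phases were never started by the kubelet.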
	I1205 06:47:31.156698  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:31.156710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:31.237470  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:31.237495  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... the probe-and-gather cycle above then repeats every ~3s, at 06:47:33, 06:47:36, 06:47:39, 06:47:42, 06:47:45, and 06:47:48, differing only in timestamps, kubectl PIDs, and the order of the log-gathering steps: pgrep still finds no kube-apiserver process; crictl lists no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; and each describe-nodes attempt keeps failing with "dial tcp [::1]:8441: connect: connection refused" / "The connection to the server localhost:8441 was refused". The excerpt breaks off midway through the 06:47:48 cycle ...]
	E1205 06:47:48.970429   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.972143   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:48.967655   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.968321   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.969895   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.970429   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.972143   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:48.975561  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:48.975572  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:49.057631  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:49.057651  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.594813  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:51.606364  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:51.606436  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:51.636377  485986 cri.go:89] found id: ""
	I1205 06:47:51.636391  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.636398  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:51.636403  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:51.636464  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:51.662318  485986 cri.go:89] found id: ""
	I1205 06:47:51.662332  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.662338  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:51.662349  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:51.662430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:51.688886  485986 cri.go:89] found id: ""
	I1205 06:47:51.688900  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.688907  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:51.688911  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:51.688969  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:51.717982  485986 cri.go:89] found id: ""
	I1205 06:47:51.717996  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.718003  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:51.718008  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:51.718066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:51.744748  485986 cri.go:89] found id: ""
	I1205 06:47:51.744762  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.744769  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:51.744783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:51.744840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:51.769889  485986 cri.go:89] found id: ""
	I1205 06:47:51.769903  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.769909  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:51.769915  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:51.769970  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:51.797012  485986 cri.go:89] found id: ""
	I1205 06:47:51.797026  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.797033  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:51.797040  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:51.797050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:51.871624  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:51.871643  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.901592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:51.901609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:51.968311  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:51.968333  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:51.983733  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:51.983748  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:52.057625  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:52.048335   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.049167   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.050935   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.051486   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.053878   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:52.048335   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.049167   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.050935   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.051486   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.053878   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:54.557903  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:54.568103  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:54.568164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:54.597086  485986 cri.go:89] found id: ""
	I1205 06:47:54.597100  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.597107  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:54.597112  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:54.597168  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:54.622728  485986 cri.go:89] found id: ""
	I1205 06:47:54.622743  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.622750  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:54.622756  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:54.622812  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:54.646642  485986 cri.go:89] found id: ""
	I1205 06:47:54.646656  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.646663  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:54.646668  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:54.646723  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:54.671271  485986 cri.go:89] found id: ""
	I1205 06:47:54.671286  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.671293  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:54.671299  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:54.671355  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:54.696124  485986 cri.go:89] found id: ""
	I1205 06:47:54.696138  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.696150  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:54.696155  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:54.696210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:54.720362  485986 cri.go:89] found id: ""
	I1205 06:47:54.720375  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.720383  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:54.720388  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:54.720442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:54.754080  485986 cri.go:89] found id: ""
	I1205 06:47:54.754094  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.754101  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:54.754108  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:54.754121  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:54.820260  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:54.820281  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:54.836201  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:54.836217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:54.909051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:54.900823   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.901529   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903370   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903888   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.905505   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:54.900823   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.901529   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903370   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903888   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.905505   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:54.909069  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:54.909080  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:54.984892  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:54.984912  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:57.516912  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:57.527633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:57.527698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:57.553823  485986 cri.go:89] found id: ""
	I1205 06:47:57.553837  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.553844  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:57.553851  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:57.553924  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:57.581054  485986 cri.go:89] found id: ""
	I1205 06:47:57.581068  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.581075  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:57.581080  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:57.581139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:57.606438  485986 cri.go:89] found id: ""
	I1205 06:47:57.606452  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.606460  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:57.606465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:57.606522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:57.632199  485986 cri.go:89] found id: ""
	I1205 06:47:57.632214  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.632220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:57.632226  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:57.632285  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:57.661439  485986 cri.go:89] found id: ""
	I1205 06:47:57.661454  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.661460  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:57.661465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:57.661521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:57.690916  485986 cri.go:89] found id: ""
	I1205 06:47:57.690930  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.690937  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:57.690943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:57.691003  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:57.716612  485986 cri.go:89] found id: ""
	I1205 06:47:57.716625  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.716632  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:57.716640  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:57.716650  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:57.787213  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:57.787235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:57.802362  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:57.802400  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:57.864350  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:57.856663   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.857331   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.858792   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.859379   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.860927   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:57.856663   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.857331   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.858792   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.859379   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.860927   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:57.864360  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:57.864370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:57.941328  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:57.941349  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:00.470137  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:00.483635  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:00.483706  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:00.512315  485986 cri.go:89] found id: ""
	I1205 06:48:00.512330  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.512338  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:00.512345  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:00.512409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:00.546442  485986 cri.go:89] found id: ""
	I1205 06:48:00.546457  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.546464  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:00.546469  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:00.546530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:00.573096  485986 cri.go:89] found id: ""
	I1205 06:48:00.573110  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.573123  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:00.573128  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:00.573187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:00.603254  485986 cri.go:89] found id: ""
	I1205 06:48:00.603268  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.603275  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:00.603280  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:00.603337  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:00.633558  485986 cri.go:89] found id: ""
	I1205 06:48:00.633572  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.633579  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:00.633586  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:00.633651  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:00.660790  485986 cri.go:89] found id: ""
	I1205 06:48:00.660804  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.660810  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:00.660816  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:00.660874  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:00.688773  485986 cri.go:89] found id: ""
	I1205 06:48:00.688786  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.688793  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:00.688800  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:00.688811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:00.753427  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:00.753450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:00.768529  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:00.768545  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:00.832028  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:00.823845   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.824604   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826256   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826855   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.828522   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:00.823845   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.824604   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826256   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826855   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.828522   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:00.832038  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:00.832048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:00.909664  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:00.909686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:03.440768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:03.451151  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:03.451210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:03.475560  485986 cri.go:89] found id: ""
	I1205 06:48:03.475574  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.475580  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:03.475586  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:03.475657  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:03.500266  485986 cri.go:89] found id: ""
	I1205 06:48:03.500280  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.500286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:03.500291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:03.500350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:03.528908  485986 cri.go:89] found id: ""
	I1205 06:48:03.528921  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.528928  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:03.528933  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:03.528993  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:03.556882  485986 cri.go:89] found id: ""
	I1205 06:48:03.556896  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.556903  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:03.556908  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:03.556963  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:03.582231  485986 cri.go:89] found id: ""
	I1205 06:48:03.582244  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.582252  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:03.582257  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:03.582315  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:03.611645  485986 cri.go:89] found id: ""
	I1205 06:48:03.611658  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.611665  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:03.611670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:03.611732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:03.637034  485986 cri.go:89] found id: ""
	I1205 06:48:03.637048  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.637055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:03.637062  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:03.637072  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:03.703283  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:03.703305  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:03.718166  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:03.718182  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:03.784612  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:03.776937   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.777755   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779272   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779806   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.781286   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:03.776937   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.777755   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779272   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779806   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.781286   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:03.784623  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:03.784645  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:03.865840  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:03.865871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:06.395611  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:06.406190  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:06.406253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:06.431958  485986 cri.go:89] found id: ""
	I1205 06:48:06.431972  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.431979  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:06.431984  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:06.432047  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:06.457302  485986 cri.go:89] found id: ""
	I1205 06:48:06.457317  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.457324  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:06.457329  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:06.457391  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:06.482778  485986 cri.go:89] found id: ""
	I1205 06:48:06.482793  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.482799  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:06.482805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:06.482860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:06.508293  485986 cri.go:89] found id: ""
	I1205 06:48:06.508307  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.508314  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:06.508319  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:06.508457  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:06.537089  485986 cri.go:89] found id: ""
	I1205 06:48:06.537103  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.537110  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:06.537115  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:06.537175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:06.564731  485986 cri.go:89] found id: ""
	I1205 06:48:06.564745  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.564752  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:06.564759  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:06.564815  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:06.590872  485986 cri.go:89] found id: ""
	I1205 06:48:06.590887  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.590895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:06.590903  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:06.590914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:06.658481  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:06.650418   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.651217   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.652805   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.653354   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.654995   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:06.650418   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.651217   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.652805   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.653354   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.654995   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:06.658495  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:06.658505  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:06.733300  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:06.733322  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:06.768591  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:06.768606  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:06.834509  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:06.834529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:09.350677  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:09.360723  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:09.360783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:09.388219  485986 cri.go:89] found id: ""
	I1205 06:48:09.388232  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.388239  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:09.388244  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:09.388306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:09.416992  485986 cri.go:89] found id: ""
	I1205 06:48:09.417007  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.417013  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:09.417019  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:09.417076  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:09.446304  485986 cri.go:89] found id: ""
	I1205 06:48:09.446318  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.446325  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:09.446330  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:09.446409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:09.472368  485986 cri.go:89] found id: ""
	I1205 06:48:09.472383  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.472390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:09.472395  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:09.472474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:09.497702  485986 cri.go:89] found id: ""
	I1205 06:48:09.497716  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.497722  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:09.497727  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:09.497783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:09.525679  485986 cri.go:89] found id: ""
	I1205 06:48:09.525693  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.525700  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:09.525706  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:09.525765  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:09.552628  485986 cri.go:89] found id: ""
	I1205 06:48:09.552643  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.552650  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:09.552657  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:09.552667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:09.618085  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:09.618105  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:09.633067  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:09.633084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:09.696615  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:09.688707   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.689518   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691086   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691392   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.692864   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:09.688707   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.689518   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691086   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691392   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.692864   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:09.696626  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:09.696637  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:09.772055  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:09.772074  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:12.303940  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:12.314229  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:12.314298  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:12.348459  485986 cri.go:89] found id: ""
	I1205 06:48:12.348473  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.348480  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:12.348485  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:12.348543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:12.373284  485986 cri.go:89] found id: ""
	I1205 06:48:12.373299  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.373306  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:12.373311  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:12.373375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:12.398539  485986 cri.go:89] found id: ""
	I1205 06:48:12.398559  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.398566  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:12.398571  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:12.398635  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:12.423138  485986 cri.go:89] found id: ""
	I1205 06:48:12.423151  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.423158  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:12.423163  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:12.423223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:12.447667  485986 cri.go:89] found id: ""
	I1205 06:48:12.447680  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.447688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:12.447692  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:12.447751  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:12.472343  485986 cri.go:89] found id: ""
	I1205 06:48:12.472357  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.472364  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:12.472369  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:12.472425  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:12.497076  485986 cri.go:89] found id: ""
	I1205 06:48:12.497089  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.497096  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:12.497102  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:12.497112  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:12.574451  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:12.574470  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:12.610910  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:12.610926  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:12.678117  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:12.678135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:12.692476  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:12.692492  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:12.758359  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:12.750295   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.750936   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.752531   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.753043   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.754596   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
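	Each polling cycle above sweeps the CRI runtime once per control-plane component with "sudo crictl ps -a --quiet --name=<component>" and records "0 containers" when nothing matches. A minimal Go sketch of the same sweep, assuming crictl is installed on the target host and that --quiet prints one container ID per line; the helper name listCRIContainers is hypothetical, not minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers mirrors the crictl sweep in the log above: it returns
	// the IDs of all containers (running or exited) whose name matches name.
	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err // crictl itself failed, not "no containers"
		}
		return strings.Fields(string(out)), nil // empty slice == `found id: ""`
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listCRIContainers(c)
			if err != nil {
				fmt.Printf("%s: crictl error: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

	A run against the node in this state would print "0 containers" for every component, which is exactly why the outer wait loop keeps retrying.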
	I1205 06:48:15.258636  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:15.270043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:15.270103  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:15.304750  485986 cri.go:89] found id: ""
	I1205 06:48:15.304764  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.304771  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:15.304776  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:15.304832  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:15.344151  485986 cri.go:89] found id: ""
	I1205 06:48:15.344165  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.344172  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:15.344182  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:15.344249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:15.371527  485986 cri.go:89] found id: ""
	I1205 06:48:15.371541  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.371548  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:15.371553  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:15.371618  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:15.403495  485986 cri.go:89] found id: ""
	I1205 06:48:15.403508  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.403515  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:15.403521  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:15.403581  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:15.429409  485986 cri.go:89] found id: ""
	I1205 06:48:15.429424  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.429431  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:15.429436  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:15.429501  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:15.459234  485986 cri.go:89] found id: ""
	I1205 06:48:15.459248  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.459257  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:15.459263  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:15.459320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:15.488886  485986 cri.go:89] found id: ""
	I1205 06:48:15.488900  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.488907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:15.488915  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:15.488925  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:15.556219  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:15.556239  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:15.571562  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:15.571579  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:15.635494  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:15.628155   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.628632   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630326   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630665   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.632132   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:15.635504  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:15.635514  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:15.717719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:15.717740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.253466  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:18.263430  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:18.263491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:18.305028  485986 cri.go:89] found id: ""
	I1205 06:48:18.305042  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.305049  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:18.305054  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:18.305111  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:18.332689  485986 cri.go:89] found id: ""
	I1205 06:48:18.332702  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.332709  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:18.332715  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:18.332770  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:18.360205  485986 cri.go:89] found id: ""
	I1205 06:48:18.360220  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.360227  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:18.360232  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:18.360291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:18.385479  485986 cri.go:89] found id: ""
	I1205 06:48:18.385493  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.385500  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:18.385505  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:18.385560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:18.413258  485986 cri.go:89] found id: ""
	I1205 06:48:18.413272  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.413279  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:18.413286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:18.413348  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:18.439018  485986 cri.go:89] found id: ""
	I1205 06:48:18.439032  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.439039  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:18.439044  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:18.439099  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:18.465311  485986 cri.go:89] found id: ""
	I1205 06:48:18.465324  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.465341  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:18.465348  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:18.465359  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:18.479885  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:18.479902  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:18.543997  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:18.536169   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.536669   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538416   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538850   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.540401   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:18.544007  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:18.544018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:18.620924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:18.620948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.655034  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:18.655050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:21.222770  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:21.233411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:21.233478  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:21.263287  485986 cri.go:89] found id: ""
	I1205 06:48:21.263302  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.263309  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:21.263315  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:21.263379  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:21.300915  485986 cri.go:89] found id: ""
	I1205 06:48:21.300929  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.300936  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:21.300941  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:21.301005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:21.328975  485986 cri.go:89] found id: ""
	I1205 06:48:21.328989  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.328999  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:21.329004  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:21.329061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:21.358828  485986 cri.go:89] found id: ""
	I1205 06:48:21.358842  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.358849  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:21.358854  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:21.358914  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:21.384401  485986 cri.go:89] found id: ""
	I1205 06:48:21.384422  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.384429  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:21.384434  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:21.384491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:21.409705  485986 cri.go:89] found id: ""
	I1205 06:48:21.409719  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.409726  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:21.409732  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:21.409791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:21.437633  485986 cri.go:89] found id: ""
	I1205 06:48:21.437650  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.437658  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:21.437665  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:21.437675  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:21.515785  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:21.515808  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:21.549019  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:21.549035  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:21.620027  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:21.620048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:21.635622  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:21.635638  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:21.710252  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:21.702462   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.703235   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.704737   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.705215   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.706750   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:24.210507  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:24.221002  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:24.221061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:24.246259  485986 cri.go:89] found id: ""
	I1205 06:48:24.246273  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.246280  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:24.246285  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:24.246350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:24.274723  485986 cri.go:89] found id: ""
	I1205 06:48:24.274736  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.274743  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:24.274749  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:24.274807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:24.312165  485986 cri.go:89] found id: ""
	I1205 06:48:24.312179  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.312186  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:24.312191  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:24.312248  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:24.351913  485986 cri.go:89] found id: ""
	I1205 06:48:24.351927  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.351934  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:24.351939  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:24.351995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:24.377944  485986 cri.go:89] found id: ""
	I1205 06:48:24.377958  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.377966  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:24.377971  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:24.378029  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:24.403127  485986 cri.go:89] found id: ""
	I1205 06:48:24.403142  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.403149  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:24.403154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:24.403211  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:24.428745  485986 cri.go:89] found id: ""
	I1205 06:48:24.428760  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.428777  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:24.428785  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:24.428795  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:24.495838  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:24.495860  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:24.511294  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:24.511309  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:24.577637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:24.569622   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.570426   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.571915   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.572368   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.573899   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:24.577647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:24.577658  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:24.664395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:24.664422  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
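	Each cycle opens with the same probe, "sudo pgrep -xnf kube-apiserver.*minikube.*", and the timestamps (06:48:12, :15, :18, ...) show it recurring roughly every three seconds while the process never appears. A sketch of such a wait loop in Go; the three-second cadence is read off the log, while the deadline and helper name are assumptions rather than minikube's actual configuration:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep probe from the log: pgrep exits
	// non-zero when no matching process exists, so Run() returns an error.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed timeout, for illustration only
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			// The real tool gathers kubelet/dmesg/CRI-O/container-status logs
			// between probes, which is what fills the cycles in this report.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}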
	I1205 06:48:27.196552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:27.206670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:27.206729  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:27.232859  485986 cri.go:89] found id: ""
	I1205 06:48:27.232873  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.232880  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:27.232885  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:27.232944  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:27.261077  485986 cri.go:89] found id: ""
	I1205 06:48:27.261091  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.261098  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:27.261104  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:27.261157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:27.299035  485986 cri.go:89] found id: ""
	I1205 06:48:27.299049  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.299056  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:27.299061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:27.299117  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:27.325080  485986 cri.go:89] found id: ""
	I1205 06:48:27.325094  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.325100  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:27.325105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:27.325165  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:27.355194  485986 cri.go:89] found id: ""
	I1205 06:48:27.355208  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.355215  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:27.355220  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:27.355281  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:27.380260  485986 cri.go:89] found id: ""
	I1205 06:48:27.380274  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.380281  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:27.380286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:27.380340  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:27.404746  485986 cri.go:89] found id: ""
	I1205 06:48:27.404760  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.404767  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:27.404774  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:27.404784  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:27.471214  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:27.471234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:27.486196  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:27.486213  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:27.549013  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:27.540412   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.541998   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.542711   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544155   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544607   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:27.549023  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:27.549034  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:27.626719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:27.626740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.157779  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:30.168828  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:30.168888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:30.196472  485986 cri.go:89] found id: ""
	I1205 06:48:30.196487  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.196494  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:30.196500  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:30.196561  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:30.222435  485986 cri.go:89] found id: ""
	I1205 06:48:30.222449  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.222456  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:30.222463  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:30.222521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:30.252893  485986 cri.go:89] found id: ""
	I1205 06:48:30.252907  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.252914  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:30.252919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:30.252979  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:30.293703  485986 cri.go:89] found id: ""
	I1205 06:48:30.293717  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.293724  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:30.293729  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:30.293791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:30.323711  485986 cri.go:89] found id: ""
	I1205 06:48:30.323724  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.323731  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:30.323746  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:30.323804  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:30.355817  485986 cri.go:89] found id: ""
	I1205 06:48:30.355831  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.355838  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:30.355844  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:30.355905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:30.384820  485986 cri.go:89] found id: ""
	I1205 06:48:30.384834  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.384850  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:30.384858  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:30.384869  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:30.400554  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:30.400571  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:30.462509  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:30.454797   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.455349   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.456851   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.457304   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.458799   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:30.462519  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:30.462529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:30.539861  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:30.539884  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.572611  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:30.572627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:33.142900  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:33.153456  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:33.153522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:33.178913  485986 cri.go:89] found id: ""
	I1205 06:48:33.178926  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.178933  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:33.178939  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:33.178994  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:33.204173  485986 cri.go:89] found id: ""
	I1205 06:48:33.204187  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.204195  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:33.204200  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:33.204260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:33.231661  485986 cri.go:89] found id: ""
	I1205 06:48:33.231675  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.231688  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:33.231693  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:33.231749  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:33.256100  485986 cri.go:89] found id: ""
	I1205 06:48:33.256113  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.256120  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:33.256125  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:33.256180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:33.288692  485986 cri.go:89] found id: ""
	I1205 06:48:33.288706  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.288713  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:33.288718  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:33.288778  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:33.322902  485986 cri.go:89] found id: ""
	I1205 06:48:33.322916  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.322931  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:33.322936  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:33.322995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:33.354832  485986 cri.go:89] found id: ""
	I1205 06:48:33.354846  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.354853  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:33.354861  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:33.354871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:33.419523  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:33.419542  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:33.436533  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:33.436549  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:33.500717  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:33.492589   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.493351   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.494906   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.495229   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.497011   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:33.500727  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:33.500744  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:33.576166  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:33.576187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.103891  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:36.114026  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:36.114086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:36.138404  485986 cri.go:89] found id: ""
	I1205 06:48:36.138419  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.138426  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:36.138432  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:36.138490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:36.165135  485986 cri.go:89] found id: ""
	I1205 06:48:36.165149  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.165156  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:36.165161  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:36.165218  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:36.190238  485986 cri.go:89] found id: ""
	I1205 06:48:36.190252  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.190259  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:36.190264  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:36.190323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:36.216962  485986 cri.go:89] found id: ""
	I1205 06:48:36.216975  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.216982  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:36.216987  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:36.217043  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:36.241075  485986 cri.go:89] found id: ""
	I1205 06:48:36.241089  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.241096  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:36.241107  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:36.241174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:36.267257  485986 cri.go:89] found id: ""
	I1205 06:48:36.267272  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.267278  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:36.267284  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:36.267350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:36.293288  485986 cri.go:89] found id: ""
	I1205 06:48:36.293310  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.293320  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:36.293327  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:36.293338  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:36.363749  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:36.356228   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.356654   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358204   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358589   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.360031   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:36.363759  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:36.363769  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:36.438180  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:36.438203  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.466903  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:36.466919  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:36.532968  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:36.532989  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
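
The block above is one full pass of minikube's control-plane diagnostics: it lists CRI containers for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and then gathers describe-nodes, CRI-O, container-status, kubelet, and dmesg output. A minimal Go sketch of the container-enumeration step, assuming a local crictl and using an illustrative listCRIContainers helper (not minikube's actual cri.go code), could look like:

    // Illustrative only: approximates what the cri.go lines above report,
    // not minikube's implementation.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listCRIContainers runs `crictl ps -a --quiet --name=<name>` and returns
    // the container IDs it prints, one per line; an empty slice corresponds to
    // the repeated `found id: ""` / `0 containers` lines in the log.
    func listCRIContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		ids, err := listCRIContainers(name)
    		if err != nil {
    			fmt.Printf("W: listing %q failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("I: %d containers matching %q: %v\n", len(ids), name, ids)
    	}
    }

Run on the node itself, an empty result for every name is exactly the state the log keeps recording.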
	I1205 06:48:39.048421  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:39.059045  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:39.059109  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:39.083511  485986 cri.go:89] found id: ""
	I1205 06:48:39.083526  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.083532  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:39.083537  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:39.083599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:39.107712  485986 cri.go:89] found id: ""
	I1205 06:48:39.107725  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.107732  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:39.107736  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:39.107793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:39.132566  485986 cri.go:89] found id: ""
	I1205 06:48:39.132580  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.132588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:39.132593  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:39.132650  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:39.161417  485986 cri.go:89] found id: ""
	I1205 06:48:39.161431  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.161438  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:39.161443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:39.161511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:39.186314  485986 cri.go:89] found id: ""
	I1205 06:48:39.186328  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.186335  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:39.186340  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:39.186428  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:39.210957  485986 cri.go:89] found id: ""
	I1205 06:48:39.210971  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.210980  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:39.210986  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:39.211044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:39.236120  485986 cri.go:89] found id: ""
	I1205 06:48:39.236134  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.236141  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:39.236148  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:39.236159  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:39.250894  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:39.250911  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:39.334545  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:39.334556  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:39.334567  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:39.413949  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:39.413970  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:39.444354  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:39.444370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
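
Every describe-nodes attempt above fails the same way: kubectl, pointed at https://localhost:8441 by /var/lib/minikube/kubeconfig, cannot even open a TCP connection because no kube-apiserver is running. A minimal probe reproducing the "connect: connection refused" error seen in the stderr blocks (illustrative, not part of the test harness):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// kubectl targets https://localhost:8441 per the kubeconfig; with no
    	// kube-apiserver container running, the TCP dial itself fails before
    	// any TLS or HTTP exchange happens.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }

This matches the crictl results: the failure is at the socket level, not an authentication or API problem.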
	I1205 06:48:42.015174  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:42.026667  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:42.026732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:42.056643  485986 cri.go:89] found id: ""
	I1205 06:48:42.056658  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.056666  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:42.056672  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:42.056732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:42.084714  485986 cri.go:89] found id: ""
	I1205 06:48:42.084731  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.084745  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:42.084750  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:42.084817  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:42.115735  485986 cri.go:89] found id: ""
	I1205 06:48:42.115750  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.115757  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:42.115763  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:42.115828  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:42.148687  485986 cri.go:89] found id: ""
	I1205 06:48:42.148703  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.148711  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:42.148717  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:42.148783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:42.183060  485986 cri.go:89] found id: ""
	I1205 06:48:42.183076  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.183084  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:42.183089  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:42.183162  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:42.216582  485986 cri.go:89] found id: ""
	I1205 06:48:42.216598  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.216606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:42.216612  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:42.216684  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:42.247171  485986 cri.go:89] found id: ""
	I1205 06:48:42.247186  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.247193  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:42.247201  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:42.247217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:42.285459  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:42.285487  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.355504  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:42.355523  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:42.370693  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:42.370709  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:42.438568  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:42.438578  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:42.438588  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
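
The remaining gathering steps shell out through ssh_runner: the last 400 journal lines for the crio and kubelet units, a priority-filtered dmesg tail, and a container listing that prefers crictl but falls back to docker. In that last command, the `which crictl || echo crictl` substitution keeps the command word as plain crictl even when which finds nothing, so sudo's own PATH lookup gets a final chance before the `sudo docker ps -a` fallback runs. A rough local equivalent of those commands, with an illustrative gather helper (the helper name is an assumption, not minikube's logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one collection script via bash, mirroring the Run: lines above.
    func gather(label, script string) {
    	fmt.Println("==>", label)
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	if err != nil {
    		fmt.Printf("(%s failed: %v)\n", label, err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	gather("CRI-O", `sudo journalctl -u crio -n 400`)
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }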
	I1205 06:48:45.014965  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:45.054270  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:45.054339  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:45.114057  485986 cri.go:89] found id: ""
	I1205 06:48:45.114075  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.114090  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:45.114097  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:45.114172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:45.165369  485986 cri.go:89] found id: ""
	I1205 06:48:45.165394  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.165402  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:45.165408  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:45.165494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:45.212325  485986 cri.go:89] found id: ""
	I1205 06:48:45.212342  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.212349  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:45.212355  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:45.212424  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:45.254096  485986 cri.go:89] found id: ""
	I1205 06:48:45.254114  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.254127  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:45.254134  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:45.254294  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:45.305666  485986 cri.go:89] found id: ""
	I1205 06:48:45.305681  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.305688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:45.305694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:45.305753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:45.347701  485986 cri.go:89] found id: ""
	I1205 06:48:45.347715  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.347721  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:45.347726  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:45.347793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:45.373745  485986 cri.go:89] found id: ""
	I1205 06:48:45.373760  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.373775  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:45.373782  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:45.373793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:45.439756  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:45.439776  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:45.454781  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:45.454797  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:45.521815  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:45.514514   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.515029   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516480   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516967   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.518548   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:45.514514   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.515029   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516480   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516967   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.518548   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:45.521826  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:45.521838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:45.602427  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:45.602455  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.134541  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:48.144703  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:48.144768  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:48.169929  485986 cri.go:89] found id: ""
	I1205 06:48:48.169942  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.169949  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:48.169954  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:48.170014  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:48.194815  485986 cri.go:89] found id: ""
	I1205 06:48:48.194828  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.194835  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:48.194840  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:48.194898  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:48.220017  485986 cri.go:89] found id: ""
	I1205 06:48:48.220031  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.220038  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:48.220043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:48.220101  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:48.249449  485986 cri.go:89] found id: ""
	I1205 06:48:48.249462  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.249470  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:48.249481  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:48.249552  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:48.284921  485986 cri.go:89] found id: ""
	I1205 06:48:48.284935  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.284942  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:48.284947  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:48.285006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:48.315138  485986 cri.go:89] found id: ""
	I1205 06:48:48.315152  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.315159  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:48.315164  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:48.315223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:48.347265  485986 cri.go:89] found id: ""
	I1205 06:48:48.347279  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.347286  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:48.347293  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:48.347304  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.375662  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:48.375678  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:48.440841  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:48.440863  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:48.456128  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:48.456144  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:48.523196  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:48.515425   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.515785   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.517359   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.518051   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.519586   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:48.515425   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.515785   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.517359   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.518051   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.519586   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:48.523206  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:48.523216  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
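
Between gathering passes, the harness re-probes for an apiserver process with pgrep -xnf, where -f matches against the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest match. pgrep exits 1 when nothing matches, which is what keeps this loop running. A hedged Go sketch of that probe (apiserverPID is an illustrative name, not minikube's code):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // apiserverPID mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe above: exit status 0 with a PID means a match, exit status 1
    // means no such process.
    func apiserverPID() (string, bool, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return "", false, nil // no matching process, as in the log above
    	}
    	if err != nil {
    		return "", false, err
    	}
    	return strings.TrimSpace(string(out)), true, nil
    }

    func main() {
    	pid, ok, err := apiserverPID()
    	switch {
    	case err != nil:
    		fmt.Println("probe failed:", err)
    	case !ok:
    		fmt.Println("kube-apiserver not running")
    	default:
    		fmt.Println("kube-apiserver pid:", pid)
    	}
    }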
	I1205 06:48:51.100852  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:51.111413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:51.111475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:51.139392  485986 cri.go:89] found id: ""
	I1205 06:48:51.139406  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.139414  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:51.139419  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:51.139483  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:51.167265  485986 cri.go:89] found id: ""
	I1205 06:48:51.167279  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.167286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:51.167291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:51.167347  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:51.192337  485986 cri.go:89] found id: ""
	I1205 06:48:51.192351  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.192358  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:51.192363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:51.192419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:51.217599  485986 cri.go:89] found id: ""
	I1205 06:48:51.217614  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.217621  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:51.217627  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:51.217683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:51.242555  485986 cri.go:89] found id: ""
	I1205 06:48:51.242568  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.242576  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:51.242580  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:51.242641  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:51.270447  485986 cri.go:89] found id: ""
	I1205 06:48:51.270462  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.270469  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:51.270474  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:51.270551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:51.300340  485986 cri.go:89] found id: ""
	I1205 06:48:51.300353  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.300360  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:51.300375  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:51.300385  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:51.373583  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:51.373604  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:51.388609  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:51.388624  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:51.449562  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:51.442150   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.442836   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444322   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444649   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.446074   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:51.442150   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.442836   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444322   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444649   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.446074   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:51.449572  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:51.449584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:51.523352  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:51.523373  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.052404  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:54.065168  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:54.065280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:54.097086  485986 cri.go:89] found id: ""
	I1205 06:48:54.097102  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.097109  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:54.097114  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:54.097173  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:54.128973  485986 cri.go:89] found id: ""
	I1205 06:48:54.128988  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.128995  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:54.129000  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:54.129066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:54.163279  485986 cri.go:89] found id: ""
	I1205 06:48:54.163294  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.163301  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:54.163305  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:54.163363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:54.200034  485986 cri.go:89] found id: ""
	I1205 06:48:54.200049  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.200056  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:54.200061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:54.200119  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:54.232483  485986 cri.go:89] found id: ""
	I1205 06:48:54.232498  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.232504  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:54.232509  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:54.232572  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:54.256577  485986 cri.go:89] found id: ""
	I1205 06:48:54.256598  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.256606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:54.256611  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:54.256673  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:54.288762  485986 cri.go:89] found id: ""
	I1205 06:48:54.288788  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.288796  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:54.288804  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:54.288815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:54.368738  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:54.368758  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.395932  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:54.395948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:54.464047  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:54.464066  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:54.479400  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:54.479416  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:54.546819  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:54.538668   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.539294   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.540985   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.541524   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.543065   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:54.538668   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.539294   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.540985   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.541524   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.543065   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
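
The timestamps above (06:48:36, :39, :42, :45, :48, :51, :54, and so on) show the probe firing roughly every three seconds. A speculative sketch of that cadence as a plain poll loop, assuming a two-minute budget for illustration (minikube's actual wait logic is not shown in this log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the spacing of the log cycles
    	}
    	fmt.Println("timed out waiting for apiserver on localhost:8441")
    }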
	I1205 06:48:57.047675  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:57.058076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:57.058143  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:57.082332  485986 cri.go:89] found id: ""
	I1205 06:48:57.082347  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.082355  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:57.082360  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:57.082442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:57.108051  485986 cri.go:89] found id: ""
	I1205 06:48:57.108071  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.108078  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:57.108083  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:57.108139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:57.137107  485986 cri.go:89] found id: ""
	I1205 06:48:57.137129  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.137136  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:57.137141  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:57.137198  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:57.163240  485986 cri.go:89] found id: ""
	I1205 06:48:57.163272  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.163279  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:57.163285  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:57.163352  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:57.192699  485986 cri.go:89] found id: ""
	I1205 06:48:57.192725  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.192735  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:57.192740  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:57.192807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:57.220916  485986 cri.go:89] found id: ""
	I1205 06:48:57.220931  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.220938  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:57.220943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:57.221010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:57.248028  485986 cri.go:89] found id: ""
	I1205 06:48:57.248042  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.248049  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:57.248057  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:57.248068  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:57.262955  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:57.262971  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:57.355127  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:57.346596   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.347188   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.348974   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.349619   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.351449   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:57.346596   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.347188   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.348974   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.349619   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.351449   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:57.355141  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:57.355151  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:57.433116  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:57.433135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:57.464587  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:57.464603  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.033434  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:00.083145  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:00.083219  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:00.188573  485986 cri.go:89] found id: ""
	I1205 06:49:00.188591  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.188607  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:00.188613  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:00.188683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:00.262241  485986 cri.go:89] found id: ""
	I1205 06:49:00.262258  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.262265  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:00.262271  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:00.262346  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:00.303849  485986 cri.go:89] found id: ""
	I1205 06:49:00.303866  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.303875  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:00.303881  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:00.303981  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:00.349047  485986 cri.go:89] found id: ""
	I1205 06:49:00.349063  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.349071  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:00.349076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:00.349147  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:00.379299  485986 cri.go:89] found id: ""
	I1205 06:49:00.379317  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.379325  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:00.379332  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:00.379419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:00.409559  485986 cri.go:89] found id: ""
	I1205 06:49:00.409575  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.409582  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:00.409589  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:00.409656  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:00.439884  485986 cri.go:89] found id: ""
	I1205 06:49:00.439899  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.439907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:00.439916  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:00.439933  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.508652  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:00.508672  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:00.524482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:00.524504  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:00.586066  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:00.578087   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.578919   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.580633   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.581135   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.582631   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:00.578087   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.578919   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.580633   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.581135   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.582631   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:00.586076  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:00.586087  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:00.663208  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:00.663229  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:03.193638  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:03.204025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:03.204086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:03.228565  485986 cri.go:89] found id: ""
	I1205 06:49:03.228579  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.228586  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:03.228592  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:03.228649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:03.254850  485986 cri.go:89] found id: ""
	I1205 06:49:03.254864  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.254871  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:03.254876  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:03.254937  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:03.289088  485986 cri.go:89] found id: ""
	I1205 06:49:03.289101  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.289108  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:03.289113  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:03.289194  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:03.322876  485986 cri.go:89] found id: ""
	I1205 06:49:03.322891  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.322905  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:03.322910  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:03.322971  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:03.352868  485986 cri.go:89] found id: ""
	I1205 06:49:03.352883  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.352890  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:03.352895  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:03.352957  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:03.381474  485986 cri.go:89] found id: ""
	I1205 06:49:03.381495  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.381502  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:03.381508  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:03.381569  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:03.410037  485986 cri.go:89] found id: ""
	I1205 06:49:03.410051  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.410058  485986 logs.go:284] No container was found matching "kindnet"
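Each `found id: ""` / `0 containers` pair above comes from listing CRI containers by name and getting no IDs back, component by component. A hedged sketch of that step, assuming crictl is on PATH (minikube actually runs it over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs shells out to `crictl ps -a --quiet --name=<name>`,
    // which prints one container ID per line -- or nothing at all when no
    // container matches, as in every cycle of the log above.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty slice when output is blank
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }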
	I1205 06:49:03.410071  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:03.410081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:03.479009  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:03.479028  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:03.493685  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:03.493702  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:03.561170  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:03.553306   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.554220   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.555759   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.556128   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.557670   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
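Every `describe nodes` attempt dies the same way: kubectl cannot even open a TCP connection to localhost:8441, this profile's API server port, so the failure is at the socket level rather than anywhere in the Kubernetes API. A quick way to confirm nothing is listening, sketched in Go:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" from kubectl means the dial itself fails;
        // a bare TCP dial reproduces that without any API machinery.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // matches the refusals in the log
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }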
	I1205 06:49:03.561179  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:03.561190  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:03.638291  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:03.638315  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.175002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:06.185259  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:06.185319  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:06.215092  485986 cri.go:89] found id: ""
	I1205 06:49:06.215106  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.215113  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:06.215119  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:06.215175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:06.245195  485986 cri.go:89] found id: ""
	I1205 06:49:06.245209  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.245216  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:06.245221  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:06.245283  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:06.271319  485986 cri.go:89] found id: ""
	I1205 06:49:06.271333  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.271340  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:06.271346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:06.271404  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:06.300132  485986 cri.go:89] found id: ""
	I1205 06:49:06.300146  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.300152  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:06.300158  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:06.300216  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:06.337931  485986 cri.go:89] found id: ""
	I1205 06:49:06.337945  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.337952  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:06.337957  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:06.338017  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:06.365963  485986 cri.go:89] found id: ""
	I1205 06:49:06.365978  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.365985  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:06.365991  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:06.366048  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:06.396366  485986 cri.go:89] found id: ""
	I1205 06:49:06.396382  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.396389  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:06.396397  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:06.396410  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.424940  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:06.424956  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:06.490847  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:06.490864  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:06.506209  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:06.506225  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:06.572331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:06.564878   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.565472   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.566969   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.567471   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.568900   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:06.572342  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:06.572352  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:09.157509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:09.167469  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:09.167529  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:09.192289  485986 cri.go:89] found id: ""
	I1205 06:49:09.192304  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.192311  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:09.192316  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:09.192375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:09.217082  485986 cri.go:89] found id: ""
	I1205 06:49:09.217096  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.217103  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:09.217108  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:09.217167  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:09.242357  485986 cri.go:89] found id: ""
	I1205 06:49:09.242371  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.242412  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:09.242417  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:09.242474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:09.267197  485986 cri.go:89] found id: ""
	I1205 06:49:09.267211  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.267218  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:09.267223  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:09.267282  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:09.302740  485986 cri.go:89] found id: ""
	I1205 06:49:09.302754  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.302761  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:09.302766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:09.302824  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:09.338883  485986 cri.go:89] found id: ""
	I1205 06:49:09.338910  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.338917  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:09.338923  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:09.338988  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:09.365834  485986 cri.go:89] found id: ""
	I1205 06:49:09.365848  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.365855  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:09.365862  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:09.365872  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:09.433408  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:09.433430  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:09.448763  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:09.448785  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:09.510400  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:09.502828   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.503352   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505004   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505431   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.506857   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:09.510413  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:09.510424  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:09.589135  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:09.589155  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
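The container-status command is built defensively: `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` resolves crictl if installed, attempts it regardless, and falls back to docker when the first command fails. The same fallback expressed directly in Go (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl and falls back to docker, mirroring
    // the shell one-liner from the log.
    func containerStatus() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        // Either crictl is missing or it failed: try docker instead.
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no runtime CLI answered:", err)
            return
        }
        fmt.Print(string(out))
    }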
	I1205 06:49:12.118439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:12.128584  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:12.128642  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:12.153045  485986 cri.go:89] found id: ""
	I1205 06:49:12.153059  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.153066  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:12.153071  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:12.153138  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:12.181785  485986 cri.go:89] found id: ""
	I1205 06:49:12.181798  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.181805  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:12.181810  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:12.181867  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:12.208813  485986 cri.go:89] found id: ""
	I1205 06:49:12.208827  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.208834  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:12.208845  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:12.208903  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:12.234917  485986 cri.go:89] found id: ""
	I1205 06:49:12.234931  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.234938  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:12.234943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:12.235004  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:12.260438  485986 cri.go:89] found id: ""
	I1205 06:49:12.260452  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.260459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:12.260464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:12.260531  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:12.296968  485986 cri.go:89] found id: ""
	I1205 06:49:12.296981  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.296988  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:12.296994  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:12.297050  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:12.333915  485986 cri.go:89] found id: ""
	I1205 06:49:12.333929  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.333936  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:12.333943  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:12.333953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:12.406977  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:12.406998  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:12.422290  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:12.422306  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:12.488646  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:12.480809   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.481450   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.482905   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.483507   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.485097   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:12.488656  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:12.488666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:12.564028  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:12.564050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.095313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:15.105802  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:15.105864  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:15.133035  485986 cri.go:89] found id: ""
	I1205 06:49:15.133049  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.133057  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:15.133062  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:15.133118  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:15.158425  485986 cri.go:89] found id: ""
	I1205 06:49:15.158439  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.158446  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:15.158451  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:15.158507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:15.183550  485986 cri.go:89] found id: ""
	I1205 06:49:15.183564  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.183571  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:15.183576  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:15.183637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:15.209390  485986 cri.go:89] found id: ""
	I1205 06:49:15.209405  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.209413  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:15.209418  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:15.209481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:15.234806  485986 cri.go:89] found id: ""
	I1205 06:49:15.234820  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.234828  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:15.234833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:15.234893  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:15.260606  485986 cri.go:89] found id: ""
	I1205 06:49:15.260621  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.260628  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:15.260633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:15.260689  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:15.291752  485986 cri.go:89] found id: ""
	I1205 06:49:15.291766  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.291773  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:15.291782  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:15.291793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:15.308482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:15.308499  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:15.380232  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:15.372488   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.372953   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374118   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374587   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.376095   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:15.380242  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:15.380253  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:15.456924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:15.456947  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.486075  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:15.486091  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:18.055175  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:18.065657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:18.065716  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:18.092418  485986 cri.go:89] found id: ""
	I1205 06:49:18.092432  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.092440  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:18.092445  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:18.092504  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:18.119095  485986 cri.go:89] found id: ""
	I1205 06:49:18.119109  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.119116  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:18.119120  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:18.119174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:18.158317  485986 cri.go:89] found id: ""
	I1205 06:49:18.158331  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.158338  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:18.158343  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:18.158435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:18.182920  485986 cri.go:89] found id: ""
	I1205 06:49:18.182934  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.182941  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:18.182946  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:18.183006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:18.209415  485986 cri.go:89] found id: ""
	I1205 06:49:18.209430  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.209438  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:18.209443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:18.209512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:18.236631  485986 cri.go:89] found id: ""
	I1205 06:49:18.236644  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.236651  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:18.236656  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:18.236713  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:18.262726  485986 cri.go:89] found id: ""
	I1205 06:49:18.262740  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.262747  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:18.262754  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:18.262765  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:18.339996  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:18.340018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:18.358676  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:18.358696  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:18.426638  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:18.417748   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.418455   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420167   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420749   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.422549   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:18.426647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:18.426706  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:18.504263  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:18.504284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:21.036369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:21.046428  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:21.046488  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:21.071147  485986 cri.go:89] found id: ""
	I1205 06:49:21.071161  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.071168  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:21.071173  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:21.071235  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:21.095397  485986 cri.go:89] found id: ""
	I1205 06:49:21.095412  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.095421  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:21.095426  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:21.095485  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:21.119759  485986 cri.go:89] found id: ""
	I1205 06:49:21.119773  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.119780  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:21.119786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:21.119850  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:21.144972  485986 cri.go:89] found id: ""
	I1205 06:49:21.144986  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.144993  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:21.144998  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:21.145054  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:21.170022  485986 cri.go:89] found id: ""
	I1205 06:49:21.170035  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.170042  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:21.170047  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:21.170104  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:21.198867  485986 cri.go:89] found id: ""
	I1205 06:49:21.198881  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.198887  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:21.198893  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:21.198948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:21.224547  485986 cri.go:89] found id: ""
	I1205 06:49:21.224561  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.224568  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:21.224575  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:21.224585  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:21.291060  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:21.291081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:21.308799  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:21.308815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:21.380254  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:21.380264  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:21.380275  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:21.456817  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:21.456838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:23.986703  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:23.996959  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:23.997028  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:24.026421  485986 cri.go:89] found id: ""
	I1205 06:49:24.026435  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.026443  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:24.026450  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:24.026512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:24.056568  485986 cri.go:89] found id: ""
	I1205 06:49:24.056582  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.056589  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:24.056595  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:24.056654  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:24.082518  485986 cri.go:89] found id: ""
	I1205 06:49:24.082532  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.082539  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:24.082544  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:24.082605  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:24.108752  485986 cri.go:89] found id: ""
	I1205 06:49:24.108766  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.108783  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:24.108788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:24.108854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:24.142101  485986 cri.go:89] found id: ""
	I1205 06:49:24.142133  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.142140  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:24.142146  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:24.142214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:24.169035  485986 cri.go:89] found id: ""
	I1205 06:49:24.169050  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.169057  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:24.169067  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:24.169139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:24.194140  485986 cri.go:89] found id: ""
	I1205 06:49:24.194154  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.194161  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:24.194169  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:24.194179  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:24.269020  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:24.269042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:24.319041  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:24.319057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:24.402423  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:24.402446  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:24.418669  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:24.418687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:24.486837  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:26.988496  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:26.998567  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:26.998632  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:27.030118  485986 cri.go:89] found id: ""
	I1205 06:49:27.030131  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.030138  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:27.030144  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:27.030200  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:27.057209  485986 cri.go:89] found id: ""
	I1205 06:49:27.057224  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.057230  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:27.057236  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:27.057291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:27.083393  485986 cri.go:89] found id: ""
	I1205 06:49:27.083408  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.083415  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:27.083420  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:27.083480  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:27.108369  485986 cri.go:89] found id: ""
	I1205 06:49:27.108383  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.108390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:27.108394  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:27.108454  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:27.136631  485986 cri.go:89] found id: ""
	I1205 06:49:27.136645  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.136653  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:27.136659  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:27.136726  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:27.163262  485986 cri.go:89] found id: ""
	I1205 06:49:27.163277  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.163286  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:27.163294  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:27.163353  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:27.188133  485986 cri.go:89] found id: ""
	I1205 06:49:27.188152  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.188160  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:27.188167  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:27.188177  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:27.252259  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:27.252270  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:27.252280  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:27.330222  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:27.330243  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:27.360158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:27.360174  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:27.433608  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:27.433628  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
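	The cycle above keeps repeating because no control-plane container ever starts: every crictl query returns an empty id list and kubectl is refused on localhost:8441. A minimal sketch of the same probe, assuming shell access to the node (process check first, then a per-component CRI query; the names are copied from the log, the loop wrapper is illustrative and not minikube's actual implementation):
	
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver process not found'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # -a lists exited containers too; --quiet prints only container ids
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  [ -n "${ids}" ] || echo "no container was found matching \"${name}\""
	done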
	I1205 06:49:29.949566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:29.960768  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:29.960834  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:29.986155  485986 cri.go:89] found id: ""
	I1205 06:49:29.986169  485986 logs.go:282] 0 containers: []
	W1205 06:49:29.986176  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:29.986181  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:29.986241  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:30.063119  485986 cri.go:89] found id: ""
	I1205 06:49:30.063137  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.063144  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:30.063163  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:30.063243  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:30.093759  485986 cri.go:89] found id: ""
	I1205 06:49:30.093774  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.093782  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:30.093788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:30.093860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:30.123430  485986 cri.go:89] found id: ""
	I1205 06:49:30.123452  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.123460  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:30.123465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:30.123554  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:30.151722  485986 cri.go:89] found id: ""
	I1205 06:49:30.151744  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.151752  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:30.151758  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:30.151820  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:30.186802  485986 cri.go:89] found id: ""
	I1205 06:49:30.186831  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.186852  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:30.186859  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:30.186929  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:30.213270  485986 cri.go:89] found id: ""
	I1205 06:49:30.213293  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.213301  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:30.213309  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:30.213320  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:30.279872  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:30.279893  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:30.296737  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:30.296759  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:30.374429  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:30.374439  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:30.374450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:30.450678  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:30.450701  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:32.984051  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:32.993990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:32.994049  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:33.020636  485986 cri.go:89] found id: ""
	I1205 06:49:33.020650  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.020657  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:33.020663  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:33.020719  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:33.049013  485986 cri.go:89] found id: ""
	I1205 06:49:33.049027  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.049034  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:33.049039  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:33.049098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:33.078567  485986 cri.go:89] found id: ""
	I1205 06:49:33.078581  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.078588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:33.078594  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:33.078652  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:33.103212  485986 cri.go:89] found id: ""
	I1205 06:49:33.103226  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.103233  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:33.103238  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:33.103293  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:33.127983  485986 cri.go:89] found id: ""
	I1205 06:49:33.127997  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.128004  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:33.128030  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:33.128085  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:33.153777  485986 cri.go:89] found id: ""
	I1205 06:49:33.153792  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.153799  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:33.153805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:33.153863  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:33.178536  485986 cri.go:89] found id: ""
	I1205 06:49:33.178550  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.178557  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:33.178565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:33.178576  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:33.244570  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:33.244594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:33.259835  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:33.259851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:33.338788  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:33.338799  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:33.338810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:33.425207  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:33.425236  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:35.956397  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:35.966480  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:35.966543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:35.995353  485986 cri.go:89] found id: ""
	I1205 06:49:35.995367  485986 logs.go:282] 0 containers: []
	W1205 06:49:35.995374  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:35.995378  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:35.995435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:36.024388  485986 cri.go:89] found id: ""
	I1205 06:49:36.024403  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.024410  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:36.024415  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:36.024477  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:36.051022  485986 cri.go:89] found id: ""
	I1205 06:49:36.051036  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.051054  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:36.051059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:36.051124  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:36.076096  485986 cri.go:89] found id: ""
	I1205 06:49:36.076110  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.076117  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:36.076123  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:36.076180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:36.105105  485986 cri.go:89] found id: ""
	I1205 06:49:36.105119  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.105127  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:36.105131  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:36.105187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:36.131094  485986 cri.go:89] found id: ""
	I1205 06:49:36.131107  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.131114  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:36.131120  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:36.131180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:36.160327  485986 cri.go:89] found id: ""
	I1205 06:49:36.160342  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.160349  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:36.160357  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:36.160367  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:36.175190  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:36.175205  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:36.236428  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:36.236479  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:36.236489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:36.320584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:36.320608  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:36.354951  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:36.354968  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:38.924529  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:38.934948  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:38.935008  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:38.961612  485986 cri.go:89] found id: ""
	I1205 06:49:38.961626  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.961633  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:38.961638  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:38.961699  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:38.987542  485986 cri.go:89] found id: ""
	I1205 06:49:38.987562  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.987569  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:38.987574  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:38.987637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:39.017388  485986 cri.go:89] found id: ""
	I1205 06:49:39.017402  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.017409  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:39.017414  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:39.017475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:39.043798  485986 cri.go:89] found id: ""
	I1205 06:49:39.043813  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.043821  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:39.043826  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:39.043883  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:39.072134  485986 cri.go:89] found id: ""
	I1205 06:49:39.072148  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.072155  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:39.072160  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:39.072214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:39.097127  485986 cri.go:89] found id: ""
	I1205 06:49:39.097141  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.097148  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:39.097154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:39.097215  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:39.125406  485986 cri.go:89] found id: ""
	I1205 06:49:39.125420  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.125427  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:39.125434  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:39.125447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:39.191762  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:39.191782  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:39.206972  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:39.206987  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:39.274830  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:39.274841  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:39.274851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:39.365052  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:39.365073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:41.896143  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:41.906833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:41.906906  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:41.934416  485986 cri.go:89] found id: ""
	I1205 06:49:41.934430  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.934437  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:41.934442  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:41.934498  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:41.962049  485986 cri.go:89] found id: ""
	I1205 06:49:41.962063  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.962079  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:41.962084  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:41.962150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:41.991028  485986 cri.go:89] found id: ""
	I1205 06:49:41.991042  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.991049  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:41.991053  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:41.991121  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:42.020514  485986 cri.go:89] found id: ""
	I1205 06:49:42.020536  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.020544  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:42.020550  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:42.020614  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:42.047453  485986 cri.go:89] found id: ""
	I1205 06:49:42.047467  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.047474  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:42.047479  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:42.047535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:42.079004  485986 cri.go:89] found id: ""
	I1205 06:49:42.079019  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.079026  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:42.079033  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:42.079098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:42.110781  485986 cri.go:89] found id: ""
	I1205 06:49:42.110806  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.110814  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:42.110821  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:42.110832  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:42.191665  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:42.191688  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:42.241592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:42.241609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:42.314021  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:42.314041  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:42.331123  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:42.331139  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:42.401371  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:44.902557  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:44.913856  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:44.913928  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:44.944329  485986 cri.go:89] found id: ""
	I1205 06:49:44.944343  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.944350  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:44.944355  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:44.944411  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:44.972877  485986 cri.go:89] found id: ""
	I1205 06:49:44.972890  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.972897  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:44.972902  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:44.972961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:44.997771  485986 cri.go:89] found id: ""
	I1205 06:49:44.997785  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.997792  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:44.997797  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:44.997858  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:45.044196  485986 cri.go:89] found id: ""
	I1205 06:49:45.044212  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.044220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:45.044225  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:45.044296  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:45.100218  485986 cri.go:89] found id: ""
	I1205 06:49:45.100234  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.100242  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:45.100247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:45.100322  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:45.143680  485986 cri.go:89] found id: ""
	I1205 06:49:45.143696  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.143704  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:45.143710  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:45.144010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:45.184794  485986 cri.go:89] found id: ""
	I1205 06:49:45.184810  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.184818  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:45.184827  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:45.184840  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:45.266987  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:45.267020  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:45.286876  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:45.286913  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:45.370968  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:45.363581   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.364305   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.365832   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.366292   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.367509   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:45.370979  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:45.370991  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:45.446768  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:45.446788  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:47.979096  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:47.989170  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:47.989236  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:48.018828  485986 cri.go:89] found id: ""
	I1205 06:49:48.018841  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.018849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:48.018854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:48.018915  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:48.048874  485986 cri.go:89] found id: ""
	I1205 06:49:48.048888  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.048895  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:48.048901  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:48.048960  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:48.075707  485986 cri.go:89] found id: ""
	I1205 06:49:48.075722  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.075728  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:48.075733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:48.075792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:48.100630  485986 cri.go:89] found id: ""
	I1205 06:49:48.100644  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.100651  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:48.100657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:48.100715  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:48.126176  485986 cri.go:89] found id: ""
	I1205 06:49:48.126190  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.126197  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:48.126202  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:48.126266  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:48.153143  485986 cri.go:89] found id: ""
	I1205 06:49:48.153157  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.153170  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:48.153181  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:48.153249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:48.179066  485986 cri.go:89] found id: ""
	I1205 06:49:48.179080  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.179087  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:48.179094  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:48.179104  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:48.238867  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:48.231394   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.232041   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233115   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233702   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.235281   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:48.238878  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:48.238892  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:48.318473  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:48.318493  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:48.351978  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:48.352000  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:48.421167  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:48.421187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
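	Every "connection refused" above points at the same endpoint, so a direct check of port 8441 from inside the node distinguishes an apiserver that crashed from one that never started. A hedged sketch, assuming curl and ss exist in the minikube image (plausible given the crictl/journalctl calls above, but not confirmed by this log):
	
	# Is anything listening on the apiserver port this profile uses?
	sudo ss -ltn | grep ':8441' || echo 'nothing listening on :8441'
	# Probe the health endpoint directly; -k skips cert verification
	curl -k --max-time 5 https://localhost:8441/healthz || echo 'apiserver not reachable on :8441'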
	I1205 06:49:50.939180  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:50.949233  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:50.949290  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:50.978828  485986 cri.go:89] found id: ""
	I1205 06:49:50.978842  485986 logs.go:282] 0 containers: []
	W1205 06:49:50.978849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:50.978854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:50.978910  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:51.004445  485986 cri.go:89] found id: ""
	I1205 06:49:51.004461  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.004469  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:51.004475  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:51.004545  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:51.032998  485986 cri.go:89] found id: ""
	I1205 06:49:51.033012  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.033019  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:51.033025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:51.033080  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:51.058907  485986 cri.go:89] found id: ""
	I1205 06:49:51.058921  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.058929  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:51.058934  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:51.058998  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:51.088751  485986 cri.go:89] found id: ""
	I1205 06:49:51.088765  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.088773  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:51.088778  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:51.088836  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:51.114739  485986 cri.go:89] found id: ""
	I1205 06:49:51.114753  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.114760  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:51.114766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:51.114827  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:51.146228  485986 cri.go:89] found id: ""
	I1205 06:49:51.146242  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.146249  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:51.146257  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:51.146267  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:51.213460  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:51.213479  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:51.228827  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:51.228842  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:51.295308  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:51.295318  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:51.295328  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:51.378866  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:51.378887  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:53.908370  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:53.918562  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:53.918621  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:53.944262  485986 cri.go:89] found id: ""
	I1205 06:49:53.944277  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.944284  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:53.944289  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:53.944349  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:53.969495  485986 cri.go:89] found id: ""
	I1205 06:49:53.969509  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.969516  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:53.969522  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:53.969602  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:53.996074  485986 cri.go:89] found id: ""
	I1205 06:49:53.996088  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.996095  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:53.996100  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:53.996155  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:54.023768  485986 cri.go:89] found id: ""
	I1205 06:49:54.023783  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.023790  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:54.023796  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:54.023854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:54.048370  485986 cri.go:89] found id: ""
	I1205 06:49:54.048385  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.048392  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:54.048397  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:54.048458  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:54.073241  485986 cri.go:89] found id: ""
	I1205 06:49:54.073255  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.073263  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:54.073268  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:54.073329  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:54.098794  485986 cri.go:89] found id: ""
	I1205 06:49:54.098808  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.098816  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:54.098824  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:54.098833  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:54.165835  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:54.165854  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:54.181432  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:54.181447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:54.255506  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:54.255516  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:54.255529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:54.341643  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:54.341666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:56.871077  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:56.883786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:56.883848  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:56.913242  485986 cri.go:89] found id: ""
	I1205 06:49:56.913255  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.913262  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:56.913268  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:56.913325  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:56.940834  485986 cri.go:89] found id: ""
	I1205 06:49:56.940849  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.940856  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:56.940863  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:56.940923  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:56.969612  485986 cri.go:89] found id: ""
	I1205 06:49:56.969626  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.969633  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:56.969639  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:56.969698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:56.996324  485986 cri.go:89] found id: ""
	I1205 06:49:56.996338  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.996345  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:56.996351  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:56.996412  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:57.023385  485986 cri.go:89] found id: ""
	I1205 06:49:57.023399  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.023407  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:57.023412  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:57.023470  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:57.047721  485986 cri.go:89] found id: ""
	I1205 06:49:57.047734  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.047741  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:57.047747  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:57.047803  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:57.072770  485986 cri.go:89] found id: ""
	I1205 06:49:57.072783  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.072790  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:57.072798  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:57.072807  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:57.137878  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:57.137898  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:57.153088  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:57.153110  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:57.215030  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:57.215041  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:57.215057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:57.298537  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:57.298556  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:59.836134  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:59.846404  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:59.846463  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:59.871308  485986 cri.go:89] found id: ""
	I1205 06:49:59.871322  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.871329  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:59.871333  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:59.871389  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:59.897753  485986 cri.go:89] found id: ""
	I1205 06:49:59.897767  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.897774  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:59.897779  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:59.897840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:59.922634  485986 cri.go:89] found id: ""
	I1205 06:49:59.922649  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.922655  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:59.922661  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:59.922721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:59.946450  485986 cri.go:89] found id: ""
	I1205 06:49:59.946463  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.946473  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:59.946478  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:59.946535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:59.972723  485986 cri.go:89] found id: ""
	I1205 06:49:59.972738  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.972745  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:59.972750  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:59.972809  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:00.021990  485986 cri.go:89] found id: ""
	I1205 06:50:00.022006  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.022014  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:00.022020  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:00.022097  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:00.144138  485986 cri.go:89] found id: ""
	I1205 06:50:00.144154  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.144162  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:00.144171  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:00.144184  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:00.257253  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:00.257284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:00.303408  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:00.303429  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:00.439913  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:00.439925  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:00.439937  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:00.532383  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:00.532408  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:03.067932  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:03.078353  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:03.078441  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:03.108943  485986 cri.go:89] found id: ""
	I1205 06:50:03.108957  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.108964  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:03.108969  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:03.109032  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:03.139046  485986 cri.go:89] found id: ""
	I1205 06:50:03.139060  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.139077  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:03.139082  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:03.139150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:03.166455  485986 cri.go:89] found id: ""
	I1205 06:50:03.166470  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.166479  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:03.166485  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:03.166587  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:03.195955  485986 cri.go:89] found id: ""
	I1205 06:50:03.195969  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.195976  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:03.195981  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:03.196037  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:03.221513  485986 cri.go:89] found id: ""
	I1205 06:50:03.221527  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.221539  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:03.221545  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:03.221616  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:03.250570  485986 cri.go:89] found id: ""
	I1205 06:50:03.250583  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.250589  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:03.250595  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:03.250649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:03.278449  485986 cri.go:89] found id: ""
	I1205 06:50:03.278463  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.278470  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:03.278477  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:03.278488  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:03.355784  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:03.355803  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:03.375344  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:03.375365  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:03.438665  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:03.438679  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:03.438690  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:03.518012  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:03.518040  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:06.053429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:06.064448  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:06.064511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:06.091072  485986 cri.go:89] found id: ""
	I1205 06:50:06.091087  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.091094  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:06.091100  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:06.091166  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:06.119823  485986 cri.go:89] found id: ""
	I1205 06:50:06.119837  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.119844  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:06.119849  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:06.119905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:06.148798  485986 cri.go:89] found id: ""
	I1205 06:50:06.148812  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.148819  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:06.148824  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:06.148880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:06.179319  485986 cri.go:89] found id: ""
	I1205 06:50:06.179334  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.179341  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:06.179346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:06.179402  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:06.204637  485986 cri.go:89] found id: ""
	I1205 06:50:06.204652  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.204659  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:06.204665  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:06.204727  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:06.232891  485986 cri.go:89] found id: ""
	I1205 06:50:06.232906  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.232913  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:06.232919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:06.232977  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:06.260874  485986 cri.go:89] found id: ""
	I1205 06:50:06.260888  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.260895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:06.260904  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:06.260914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:06.331930  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:06.331950  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:06.349062  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:06.349078  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:06.413245  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:06.404839   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.405471   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407216   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407836   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.409486   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:06.404839   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.405471   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407216   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407836   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.409486   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:06.413254  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:06.413265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:06.491562  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:06.491584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:09.021435  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:09.031990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:09.032051  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:09.057732  485986 cri.go:89] found id: ""
	I1205 06:50:09.057746  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.057753  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:09.057758  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:09.057814  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:09.085296  485986 cri.go:89] found id: ""
	I1205 06:50:09.085309  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.085316  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:09.085321  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:09.085377  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:09.113133  485986 cri.go:89] found id: ""
	I1205 06:50:09.113147  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.113154  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:09.113159  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:09.113221  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:09.139103  485986 cri.go:89] found id: ""
	I1205 06:50:09.139117  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.139125  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:09.139130  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:09.139196  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:09.171980  485986 cri.go:89] found id: ""
	I1205 06:50:09.171995  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.172005  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:09.172011  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:09.172066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:09.197034  485986 cri.go:89] found id: ""
	I1205 06:50:09.197048  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.197055  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:09.197059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:09.197122  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:09.222626  485986 cri.go:89] found id: ""
	I1205 06:50:09.222641  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.222649  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:09.222656  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:09.222667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:09.288268  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:09.288287  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:09.304011  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:09.304027  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:09.378142  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:09.369828   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.370439   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372261   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372817   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.374506   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:09.369828   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.370439   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372261   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372817   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.374506   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:09.378151  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:09.378162  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:09.455057  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:09.455077  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:11.984604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:11.994696  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:11.994758  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:12.021691  485986 cri.go:89] found id: ""
	I1205 06:50:12.021706  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.021713  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:12.021718  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:12.021777  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:12.049086  485986 cri.go:89] found id: ""
	I1205 06:50:12.049099  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.049106  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:12.049111  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:12.049170  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:12.077335  485986 cri.go:89] found id: ""
	I1205 06:50:12.077348  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.077355  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:12.077360  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:12.077419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:12.104976  485986 cri.go:89] found id: ""
	I1205 06:50:12.104990  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.104998  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:12.105003  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:12.105065  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:12.130275  485986 cri.go:89] found id: ""
	I1205 06:50:12.130289  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.130297  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:12.130303  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:12.130359  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:12.156777  485986 cri.go:89] found id: ""
	I1205 06:50:12.156791  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.156798  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:12.156804  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:12.156862  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:12.184468  485986 cri.go:89] found id: ""
	I1205 06:50:12.184482  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.184489  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:12.184496  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:12.184506  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:12.250190  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:12.250212  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:12.265279  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:12.265295  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:12.350637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:12.342053   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.342918   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.344705   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.345237   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.346914   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:12.342053   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.342918   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.344705   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.345237   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.346914   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:12.350648  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:12.350659  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:12.429523  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:12.429548  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:14.958454  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:14.970034  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:14.970110  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:14.996731  485986 cri.go:89] found id: ""
	I1205 06:50:14.996754  485986 logs.go:282] 0 containers: []
	W1205 06:50:14.996761  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:14.996767  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:14.996833  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:15.032417  485986 cri.go:89] found id: ""
	I1205 06:50:15.032440  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.032448  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:15.032454  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:15.032524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:15.060989  485986 cri.go:89] found id: ""
	I1205 06:50:15.061008  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.061016  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:15.061022  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:15.061083  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:15.088194  485986 cri.go:89] found id: ""
	I1205 06:50:15.088208  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.088215  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:15.088221  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:15.088280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:15.115923  485986 cri.go:89] found id: ""
	I1205 06:50:15.115938  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.115945  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:15.115951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:15.116010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:15.146014  485986 cri.go:89] found id: ""
	I1205 06:50:15.146028  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.146035  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:15.146041  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:15.146150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:15.173160  485986 cri.go:89] found id: ""
	I1205 06:50:15.173175  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.173191  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:15.173199  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:15.173208  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:15.245690  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:15.237281   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.237912   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.239571   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.240233   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.241922   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:15.237281   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.237912   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.239571   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.240233   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.241922   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:15.245700  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:15.245710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:15.325395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:15.325417  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:15.356222  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:15.356276  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:15.428176  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:15.428198  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
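
These gather cycles repeat every few seconds while minikube waits for the restarted control plane to come back: each pass probes for a kube-apiserver process, lists CRI containers for every expected component, and, finding none, collects the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of one probe pass, assuming only the component names and commands quoted in the log lines above (the loop itself is illustrative, not minikube's code):

    # Approximate one diagnostic pass from this log; names come from the cri.go lines.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process yet"
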
	I1205 06:50:17.943733  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:17.954302  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:17.954363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:17.979858  485986 cri.go:89] found id: ""
	I1205 06:50:17.979872  485986 logs.go:282] 0 containers: []
	W1205 06:50:17.979879  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:17.979884  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:17.979948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:18.013482  485986 cri.go:89] found id: ""
	I1205 06:50:18.013497  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.013504  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:18.013509  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:18.013593  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:18.040079  485986 cri.go:89] found id: ""
	I1205 06:50:18.040094  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.040102  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:18.040108  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:18.040172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:18.066285  485986 cri.go:89] found id: ""
	I1205 06:50:18.066300  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.066308  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:18.066312  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:18.066369  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:18.091446  485986 cri.go:89] found id: ""
	I1205 06:50:18.091461  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.091468  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:18.091473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:18.091532  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:18.121218  485986 cri.go:89] found id: ""
	I1205 06:50:18.121234  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.121241  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:18.121247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:18.121306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:18.147004  485986 cri.go:89] found id: ""
	I1205 06:50:18.147018  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.147032  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:18.147039  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:18.147050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:18.212973  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:18.205230   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.206055   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207680   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207996   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.209502   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:18.205230   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.206055   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207680   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207996   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.209502   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:18.212983  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:18.212993  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:18.290491  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:18.290510  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:18.319970  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:18.319986  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:18.392419  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:18.392440  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:20.907875  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:20.918552  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:20.918615  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:20.948914  485986 cri.go:89] found id: ""
	I1205 06:50:20.948928  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.948935  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:20.948941  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:20.948999  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:20.974289  485986 cri.go:89] found id: ""
	I1205 06:50:20.974303  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.974310  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:20.974315  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:20.974371  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:20.999954  485986 cri.go:89] found id: ""
	I1205 06:50:20.999968  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.999976  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:20.999980  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:21.000038  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:21.029788  485986 cri.go:89] found id: ""
	I1205 06:50:21.029803  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.029810  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:21.029815  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:21.029875  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:21.055163  485986 cri.go:89] found id: ""
	I1205 06:50:21.055177  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.055183  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:21.055188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:21.055246  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:21.080955  485986 cri.go:89] found id: ""
	I1205 06:50:21.080969  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.080977  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:21.080982  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:21.081052  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:21.108615  485986 cri.go:89] found id: ""
	I1205 06:50:21.108629  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.108637  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:21.108644  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:21.108655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:21.173790  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:21.173811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:21.188952  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:21.188969  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:21.253459  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:21.245103   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.245717   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.247496   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.248173   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.249826   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:21.245103   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.245717   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.247496   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.248173   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.249826   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:21.253469  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:21.253480  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:21.337063  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:21.337084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:23.866768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:23.877363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:23.877430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:23.903790  485986 cri.go:89] found id: ""
	I1205 06:50:23.903807  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.903814  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:23.903819  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:23.903880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:23.933319  485986 cri.go:89] found id: ""
	I1205 06:50:23.933333  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.933341  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:23.933346  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:23.933403  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:23.959901  485986 cri.go:89] found id: ""
	I1205 06:50:23.959914  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.959922  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:23.959927  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:23.959987  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:23.986070  485986 cri.go:89] found id: ""
	I1205 06:50:23.986083  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.986090  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:23.986096  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:23.986154  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:24.014309  485986 cri.go:89] found id: ""
	I1205 06:50:24.014324  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.014331  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:24.014336  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:24.014422  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:24.040569  485986 cri.go:89] found id: ""
	I1205 06:50:24.040590  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.040598  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:24.040603  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:24.040663  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:24.066648  485986 cri.go:89] found id: ""
	I1205 06:50:24.066661  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.066669  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:24.066676  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:24.066687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:24.145239  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:24.145259  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:24.173133  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:24.173149  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:24.238469  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:24.238489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:24.253802  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:24.253821  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:24.341051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:24.329593   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.330313   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332016   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332556   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.337208   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:24.329593   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.330313   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332016   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332556   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.337208   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:26.841329  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:26.852711  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:26.852792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:26.878845  485986 cri.go:89] found id: ""
	I1205 06:50:26.878858  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.878865  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:26.878871  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:26.878926  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:26.903460  485986 cri.go:89] found id: ""
	I1205 06:50:26.903475  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.903482  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:26.903487  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:26.903543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:26.928316  485986 cri.go:89] found id: ""
	I1205 06:50:26.928330  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.928337  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:26.928342  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:26.928401  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:26.957464  485986 cri.go:89] found id: ""
	I1205 06:50:26.957477  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.957484  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:26.957490  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:26.957547  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:26.985494  485986 cri.go:89] found id: ""
	I1205 06:50:26.985508  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.985515  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:26.985520  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:26.985588  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:27.012077  485986 cri.go:89] found id: ""
	I1205 06:50:27.012092  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.012099  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:27.012105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:27.012164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:27.037759  485986 cri.go:89] found id: ""
	I1205 06:50:27.037772  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.037779  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:27.037802  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:27.037813  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:27.068005  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:27.068022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:27.132023  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:27.132042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:27.147964  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:27.147981  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:27.210077  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:27.201653   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.202464   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204190   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204761   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.206360   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:27.201653   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.202464   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204190   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204761   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.206360   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:27.210087  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:27.210098  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:29.784398  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:29.794460  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:29.794523  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:29.820207  485986 cri.go:89] found id: ""
	I1205 06:50:29.820221  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.820228  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:29.820235  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:29.820301  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:29.845407  485986 cri.go:89] found id: ""
	I1205 06:50:29.845421  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.845429  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:29.845434  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:29.845494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:29.871350  485986 cri.go:89] found id: ""
	I1205 06:50:29.871364  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.871371  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:29.871376  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:29.871434  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:29.896668  485986 cri.go:89] found id: ""
	I1205 06:50:29.896682  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.896689  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:29.896694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:29.896753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:29.925230  485986 cri.go:89] found id: ""
	I1205 06:50:29.925243  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.925250  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:29.925256  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:29.925320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:29.950431  485986 cri.go:89] found id: ""
	I1205 06:50:29.950445  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.950453  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:29.950459  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:29.950516  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:29.975493  485986 cri.go:89] found id: ""
	I1205 06:50:29.975507  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.975514  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:29.975522  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:29.975532  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:29.990544  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:29.990561  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:30.089331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:30.079547   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.080925   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.082899   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.083556   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.085423   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:30.079547   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.080925   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.082899   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.083556   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.085423   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:30.089343  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:30.089355  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:30.176998  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:30.177019  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:30.207325  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:30.207342  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:32.779616  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:32.789524  485986 kubeadm.go:602] duration metric: took 4m3.78523296s to restartPrimaryControlPlane
	W1205 06:50:32.789596  485986 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:50:32.789791  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
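
After roughly four minutes of these probes (4m3.785s above) the restart window lapses, minikube gives up on restartPrimaryControlPlane, and it falls back to wiping the node and re-initializing. Reduced to its two shell steps, with the binary path and CRI socket exactly as logged (the init flag list is abbreviated here; the full command appears verbatim at the kubeadm init line below):

    # Hedged reduction of the reset-then-reinit fallback shown in this log.
    KUBE_BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
    sudo env PATH="$KUBE_BIN:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="$KUBE_BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=...   # full list as logged at ssh_runner.go:286 below
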
	I1205 06:50:33.200382  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:33.213168  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:50:33.221236  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:50:33.221295  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:50:33.229165  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:50:33.229174  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:50:33.229226  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:50:33.236961  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:50:33.237026  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:50:33.244309  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:50:33.252201  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:50:33.252257  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:50:33.259677  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.267359  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:50:33.267427  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.275464  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:50:33.283208  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:50:33.283271  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
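
Before re-running init, minikube prunes stale kubeconfigs: each of the four files is grepped for the expected control-plane endpoint and removed when the check fails (here all four are simply absent, so every grep exits with status 2 and every rm is a no-op). The equivalent cleanup, assuming the endpoint and paths quoted in the kubeadm.go:164 lines above:

    # Sketch of the stale-config cleanup performed above.
    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" 2>/dev/null \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done
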
	I1205 06:50:33.290746  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:50:33.405156  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:50:33.405615  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:50:33.478173  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:54:34.582933  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:54:34.582957  485986 kubeadm.go:319] 
	I1205 06:54:34.583076  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:54:34.588185  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:34.588247  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:34.588363  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:34.588446  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:34.588482  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:34.588527  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:34.588597  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:34.588649  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:34.588697  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:34.588744  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:34.588792  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:34.588836  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:34.588883  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:34.588934  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:34.589006  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:34.589099  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:34.589189  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:34.589249  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:34.592315  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:34.592403  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:34.592463  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:34.592535  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:34.592603  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:34.592668  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:34.592743  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:34.592810  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:34.592871  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:34.592953  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:34.593046  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:34.593088  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:34.593139  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:34.593190  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:34.593242  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:34.593294  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:34.593352  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:34.593406  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:34.593499  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:34.593561  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:34.596524  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:34.596625  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:34.596698  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:34.596789  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:34.596910  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:34.597004  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:34.597119  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:34.597212  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:34.597250  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:34.597382  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:34.597485  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:54:34.597547  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00128632s
	I1205 06:54:34.597550  485986 kubeadm.go:319] 
	I1205 06:54:34.597605  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:54:34.597636  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:54:34.597743  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:54:34.597746  485986 kubeadm.go:319] 
	I1205 06:54:34.597848  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:54:34.597879  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:54:34.597909  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 06:54:34.598022  485986 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128632s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
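
The first init attempt dies on the kubelet health gate: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and the kubelet never answers. To narrow this down on the node, the probe kubeadm performs and the two commands it suggests can be run directly (all three are quoted in the output above):

    # Reproduce kubeadm's health probe and follow its own troubleshooting hints.
    curl -sSL http://127.0.0.1:10248/healthz      # the call that hit "context deadline exceeded"
    systemctl status kubelet                      # is the unit even running?
    journalctl -xeu kubelet | tail -n 100         # and if not, why it keeps failing
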
	
	I1205 06:54:34.598117  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:54:34.598416  485986 kubeadm.go:319] 
	I1205 06:54:35.010606  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:54:35.026641  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:54:35.026696  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:54:35.034906  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:54:35.034914  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:54:35.034968  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:54:35.043100  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:54:35.043156  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:54:35.050682  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:54:35.058435  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:54:35.058491  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:54:35.066352  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.075006  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:54:35.075083  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.083161  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:54:35.091527  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:54:35.091591  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:54:35.099509  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:54:35.143144  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:35.143194  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:35.214737  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:35.214806  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:35.214841  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:35.214894  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:35.214941  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:35.214988  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:35.215036  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:35.215082  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:35.215135  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:35.215179  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:35.215227  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:35.215272  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:35.280867  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:35.280975  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:35.281065  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:35.290789  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:35.294356  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:35.294469  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:35.294532  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:35.294608  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:35.294667  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:35.294735  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:35.294788  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:35.294850  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:35.294910  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:35.294989  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:35.295060  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:35.295097  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:35.295152  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:35.600230  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:35.819372  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:36.031672  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:36.347784  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:36.515743  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:36.516403  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:36.519035  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:36.522469  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:36.522648  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:36.522737  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:36.522811  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:36.538750  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:36.538854  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:36.547809  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:36.548944  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:36.549484  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:36.685042  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:36.685156  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:58:36.684952  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233025s
	I1205 06:58:36.684982  485986 kubeadm.go:319] 
	I1205 06:58:36.685040  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:58:36.685073  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:58:36.685203  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:58:36.685213  485986 kubeadm.go:319] 
	I1205 06:58:36.685319  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:58:36.685352  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:58:36.685382  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:58:36.685386  485986 kubeadm.go:319] 
	I1205 06:58:36.690024  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:58:36.690504  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:58:36.690648  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:58:36.690898  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:58:36.690904  485986 kubeadm.go:319] 
	I1205 06:58:36.690971  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:58:36.691025  485986 kubeadm.go:403] duration metric: took 12m7.722207493s to StartCluster
	I1205 06:58:36.691058  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:58:36.691120  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:58:36.717503  485986 cri.go:89] found id: ""
	I1205 06:58:36.717522  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.717530  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:58:36.717535  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:58:36.717599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:58:36.742068  485986 cri.go:89] found id: ""
	I1205 06:58:36.742083  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.742090  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:58:36.742095  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:58:36.742150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:58:36.766426  485986 cri.go:89] found id: ""
	I1205 06:58:36.766439  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.766446  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:58:36.766452  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:58:36.766507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:58:36.791681  485986 cri.go:89] found id: ""
	I1205 06:58:36.791696  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.791703  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:58:36.791707  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:58:36.791767  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:58:36.816243  485986 cri.go:89] found id: ""
	I1205 06:58:36.816257  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.816264  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:58:36.816269  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:58:36.816323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:58:36.841386  485986 cri.go:89] found id: ""
	I1205 06:58:36.841399  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.841406  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:58:36.841411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:58:36.841467  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:58:36.866554  485986 cri.go:89] found id: ""
	I1205 06:58:36.866568  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.866575  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:58:36.866584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:58:36.866594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:58:36.900565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:58:36.900582  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:58:36.968215  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:58:36.968234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:58:36.983291  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:58:36.983307  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:58:37.054622  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:58:37.054632  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:58:37.054644  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1205 06:58:37.134983  485986 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:58:37.135045  485986 out.go:285] * 
	W1205 06:58:37.135160  485986 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:58:37.135223  485986 out.go:285] * 
	W1205 06:58:37.137432  485986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:58:37.142766  485986 out.go:203] 
	W1205 06:58:37.146311  485986 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:58:37.146363  485986 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:58:37.146497  485986 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:58:37.150301  485986 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571372953Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571407981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571472524Z" level=info msg="Create NRI interface"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571571528Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571581186Z" level=info msg="runtime interface created"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571594224Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571600657Z" level=info msg="runtime interface starting up..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571606548Z" level=info msg="starting plugins..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571619709Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571689602Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:46:27 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.481366601Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1e822775-5cef-40d3-9686-eee6d086f1b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482224852Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e1ef1844-3877-40e0-84c2-d1c873b40d24 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482740149Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=56b0a0d4-9f66-4348-9e04-1e53dd2684db name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483228025Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=beb5cc41-ecba-44e2-8431-8eb7caf9e6f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483764967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d6fbbe20-116f-42f6-8365-a643bfd6a022 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484325426Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cc465c27-997c-4720-add0-d2aaefef1742 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484777542Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5847487f-12af-4b83-83de-0b1cf4bc7dd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.284218578Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9fcc6ad9-fc72-42e2-9eb3-af609b8c0fda name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285002572Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0a9c3300-2647-489a-a8c7-299acd2c2ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285494328Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ef012813-7294-42de-84e3-c56b0aecceed name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285987553Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=97a76923-ddd0-413b-afdb-1a86b6e1781b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.286464253Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=72332317-e652-4a97-9d17-3ba7818fe38f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.28695984Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=443ec697-27e1-4420-9454-8afdb0ee65b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.287383469Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7f6d73c6-60bf-4743-9f1c-60ae6c282918 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:38.369386   21862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:38.369823   21862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:38.371649   21862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:38.372376   21862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:38.373993   21862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:58:38 up  3:40,  0 user,  load average: 0.11, 0.21, 0.35
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:58:35 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:36 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Dec 05 06:58:36 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:36 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:36 functional-787602 kubelet[21673]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:36 functional-787602 kubelet[21673]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:36 functional-787602 kubelet[21673]: E1205 06:58:36.322198   21673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:36 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:36 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 05 06:58:37 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:37 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:37 functional-787602 kubelet[21757]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:37 functional-787602 kubelet[21757]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:37 functional-787602 kubelet[21757]: E1205 06:58:37.103184   21757 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 05 06:58:37 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:37 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:37 functional-787602 kubelet[21780]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:37 functional-787602 kubelet[21780]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:37 functional-787602 kubelet[21780]: E1205 06:58:37.849297   21780 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (354.504802ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.30s)
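The kubelet journal above pinpoints the root cause: kubelet v1.35.0-beta.0 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1") and systemd restarts it in a loop (restart counter 639-641), so the healthz endpoint on 127.0.0.1:10248 never comes up and kubeadm gives up after 4m0s. Per the [WARNING SystemVerification] text, cgroup v1 hosts must now opt in explicitly. A minimal sketch of that opt-in, assuming it is applied through the kubelet's KubeletConfiguration (YAML key casing inferred from the 'FailCgroupV1' option named in the warning):

	# KubeletConfiguration fragment (sketch): opt back into cgroup v1 support
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

Note that minikube's canned suggestion above (--extra-config=kubelet.cgroup-driver=systemd) addresses a cgroup-driver mismatch, a different failure mode from the cgroup v1 validation failing here.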

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-787602 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-787602 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (64.594395ms)

** stderr ** 
	E1205 06:58:39.360499  498122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 06:58:39.361947  498122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 06:58:39.363342  498122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 06:58:39.364682  498122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 06:58:39.366020  498122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-787602 get po -l tier=control-plane -n kube-system -o=json": exit status 1
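The kubectl query in functional_test.go:825 fails for the same underlying reason as every client call in this run: nothing is listening on the apiserver port, since the kubelet (and therefore the kube-apiserver static pod) never came up. A direct probe of the address from the stderr above confirms this independently of kubectl (hypothetical manual check):

	# expect "connection refused" while the control plane is down
	curl -k https://192.168.49.2:8441/healthz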
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
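
Note on the inspect output above: each exposed container port (22, 2376, 5000, 8441, 32443) is published only on 127.0.0.1 with an ephemeral host port, so the harness has to resolve the mapping at runtime; 22/tcp -> 33148 is how it reaches the node over SSH, and 8441/tcp -> 33151 is the apiserver path per the cluster config (APIServerPort:8441). A minimal sketch, not part of the test suite, assuming the docker CLI is on PATH and the functional-787602 container exists, using the same Go template that appears in the cli_runner lines later in this log:

    // portlookup.go - a sketch, not minikube code: resolve the host port that
    // Docker mapped to the node's 22/tcp, with the same template the
    // cli_runner.go invocations in this log use.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-787602").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(out))) // prints 33148 for the run above
    }
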
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (299.563955ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-252233 ssh pgrep buildkitd                                                                                                             │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ image   │ functional-252233 image ls --format yaml --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr                                            │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format json --alsologtostderr                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls --format table --alsologtostderr                                                                                       │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ image   │ functional-252233 image ls                                                                                                                        │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ delete  │ -p functional-252233                                                                                                                              │ functional-252233 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │ 05 Dec 25 06:31 UTC │
	│ start   │ -p functional-787602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:31 UTC │                     │
	│ start   │ -p functional-787602 --alsologtostderr -v=8                                                                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:39 UTC │                     │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add registry.k8s.io/pause:latest                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache add minikube-local-cache-test:functional-787602                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ functional-787602 cache delete minikube-local-cache-test:functional-787602                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl images                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ cache   │ functional-787602 cache reload                                                                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ kubectl │ functional-787602 kubectl -- --context functional-787602 get pods                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ start   │ -p functional-787602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:46:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:46:23.060483  485986 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:46:23.060587  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060592  485986 out.go:374] Setting ErrFile to fd 2...
	I1205 06:46:23.060596  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060943  485986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:46:23.061383  485986 out.go:368] Setting JSON to false
	I1205 06:46:23.062251  485986 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12510,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:46:23.062334  485986 start.go:143] virtualization:  
	I1205 06:46:23.066082  485986 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:46:23.069981  485986 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:46:23.070104  485986 notify.go:221] Checking for updates...
	I1205 06:46:23.076003  485986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:46:23.078837  485986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:46:23.081722  485986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:46:23.084680  485986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:46:23.087568  485986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:46:23.090922  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:23.091022  485986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:46:23.121487  485986 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:46:23.121590  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.189036  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.180099644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.189132  485986 docker.go:319] overlay module found
	I1205 06:46:23.192176  485986 out.go:179] * Using the docker driver based on existing profile
	I1205 06:46:23.195026  485986 start.go:309] selected driver: docker
	I1205 06:46:23.195034  485986 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.195143  485986 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:46:23.195245  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.259735  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.25087077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.260168  485986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:46:23.260193  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:23.260245  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:23.260292  485986 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.263405  485986 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:46:23.266278  485986 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:46:23.269305  485986 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:46:23.272128  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:23.272198  485986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:46:23.291679  485986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:46:23.291691  485986 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:46:23.331907  485986 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:46:24.681828  485986 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 06:46:24.681963  485986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:46:24.682057  485986 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682183  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:46:24.682191  485986 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.111µs
	I1205 06:46:24.682203  485986 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:46:24.682212  485986 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682238  485986 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:46:24.682242  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:46:24.682246  485986 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.143µs
	I1205 06:46:24.682251  485986 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682266  485986 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682260  485986 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682305  485986 start.go:364] duration metric: took 27.331µs to acquireMachinesLock for "functional-787602"
	I1205 06:46:24.682303  485986 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682317  485986 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:46:24.682322  485986 fix.go:54] fixHost starting: 
	I1205 06:46:24.682343  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:46:24.682348  485986 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.295µs
	I1205 06:46:24.682354  485986 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:46:24.682364  485986 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682416  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:46:24.682421  485986 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.691µs
	I1205 06:46:24.682428  485986 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682437  485986 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682462  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:46:24.682466  485986 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.31µs
	I1205 06:46:24.682471  485986 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682480  485986 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682514  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:46:24.682518  485986 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.27µs
	I1205 06:46:24.682523  485986 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:46:24.682531  485986 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682555  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:46:24.682558  485986 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.283µs
	I1205 06:46:24.682568  485986 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:46:24.682583  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:46:24.682587  485986 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 328.529µs
	I1205 06:46:24.682591  485986 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682599  485986 cache.go:87] Successfully saved all images to host disk.
	I1205 06:46:24.682614  485986 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:46:24.699421  485986 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:46:24.699440  485986 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:46:24.704636  485986 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:46:24.704669  485986 machine.go:94] provisionDockerMachine start ...
	I1205 06:46:24.704752  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.722297  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.722651  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.722658  485986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:46:24.869775  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:24.869801  485986 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:46:24.869864  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.887234  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.887558  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.887567  485986 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:46:25.047727  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:25.047810  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.066336  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.066675  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.066689  485986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:46:25.218719  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:46:25.218735  485986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:46:25.218754  485986 ubuntu.go:190] setting up certificates
	I1205 06:46:25.218762  485986 provision.go:84] configureAuth start
	I1205 06:46:25.218833  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:25.236317  485986 provision.go:143] copyHostCerts
	I1205 06:46:25.236383  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:46:25.236396  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:46:25.236468  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:46:25.236562  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:46:25.236565  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:46:25.236589  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:46:25.236636  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:46:25.236640  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:46:25.236661  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:46:25.236704  485986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:46:25.509369  485986 provision.go:177] copyRemoteCerts
	I1205 06:46:25.509433  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:46:25.509483  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.526532  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:25.630074  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:46:25.647569  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:46:25.665563  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:46:25.683160  485986 provision.go:87] duration metric: took 464.374115ms to configureAuth
	I1205 06:46:25.683179  485986 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:46:25.683380  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:25.683487  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.701466  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.701775  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.701787  485986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:46:26.045147  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:46:26.045161  485986 machine.go:97] duration metric: took 1.340485738s to provisionDockerMachine
	I1205 06:46:26.045171  485986 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:46:26.045182  485986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:46:26.045240  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:46:26.045301  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.071462  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.178226  485986 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:46:26.181599  485986 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:46:26.181617  485986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:46:26.181627  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:46:26.181684  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:46:26.181759  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:46:26.181833  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:46:26.181875  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:46:26.189500  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:26.206597  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:46:26.223486  485986 start.go:296] duration metric: took 178.3022ms for postStartSetup
	I1205 06:46:26.223577  485986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:46:26.223614  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.239842  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.339498  485986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:46:26.344313  485986 fix.go:56] duration metric: took 1.66198384s for fixHost
	I1205 06:46:26.344329  485986 start.go:83] releasing machines lock for "functional-787602", held for 1.662017843s
	I1205 06:46:26.344396  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:26.361695  485986 ssh_runner.go:195] Run: cat /version.json
	I1205 06:46:26.361744  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.361773  485986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:46:26.361823  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.380556  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.389997  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.566296  485986 ssh_runner.go:195] Run: systemctl --version
	I1205 06:46:26.572676  485986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:46:26.609041  485986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:46:26.613450  485986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:46:26.613514  485986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:46:26.621451  485986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:46:26.621466  485986 start.go:496] detecting cgroup driver to use...
	I1205 06:46:26.621496  485986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:46:26.621543  485986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:46:26.637300  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:46:26.650753  485986 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:46:26.650821  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:46:26.666902  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:46:26.680209  485986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:46:26.795240  485986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:46:26.925661  485986 docker.go:234] disabling docker service ...
	I1205 06:46:26.925721  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:46:26.941529  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:46:26.954708  485986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:46:27.063545  485986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:46:27.175808  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:46:27.188517  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:46:27.203590  485986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:46:27.203644  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.212003  485986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:46:27.212066  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.220691  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.229907  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.238922  485986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:46:27.247339  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.256340  485986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.264720  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.273692  485986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:46:27.281324  485986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:46:27.288509  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.394627  485986 ssh_runner.go:195] Run: sudo systemctl restart crio
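	
	Aside: the sed invocations above are whole-line rewrites of /etc/crio/crio.conf.d/02-crio.conf (any line matching "pause_image = " or "cgroup_manager = " is replaced outright), followed by a daemon-reload and a crio restart. A hypothetical, dependency-free Go sketch of the same replace-the-whole-line semantics, for illustration only (the sample conf content is invented):
	
	    // crioconf.go - mimic sed 's|^.*KEY = .*$|KEY = "VALUE"|' as run above.
	    package main
	
	    import (
	    	"fmt"
	    	"regexp"
	    )
	
	    // setKey replaces any whole line containing `KEY = ...` with `KEY = "value"`.
	    func setKey(conf, key, value string) string {
	    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	    	return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
	    }
	
	    func main() {
	    	conf := "pause_image = \"example/old-pause:0.0\"\ncgroup_manager = \"systemd\"\n" // invented sample
	    	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	    	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	    	fmt.Print(conf) // both lines now carry the values the log configures
	    }
	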
	I1205 06:46:27.581943  485986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:46:27.582023  485986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:46:27.586836  485986 start.go:564] Will wait 60s for crictl version
	I1205 06:46:27.586892  485986 ssh_runner.go:195] Run: which crictl
	I1205 06:46:27.591027  485986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:46:27.618052  485986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:46:27.618154  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.654922  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.689535  485986 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:46:27.692450  485986 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:46:27.709456  485986 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:46:27.716890  485986 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:46:27.719774  485986 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:46:27.719904  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:27.719957  485986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:46:27.756745  485986 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:46:27.756757  485986 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:46:27.756762  485986 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:46:27.756860  485986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:46:27.756933  485986 ssh_runner.go:195] Run: crio config
	I1205 06:46:27.826615  485986 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:46:27.826635  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:27.826644  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:27.826657  485986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:46:27.826679  485986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:46:27.826795  485986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
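	Aside: the KubeletConfiguration block in the rendered kubeadm config above pins cgroupDriver to cgroupfs and the CRI endpoint to the crio socket, matching the cgroup_manager rewrite and the crictl.yaml written over SSH earlier; a cgroup-driver mismatch between kubelet and CRI-O is a well-known way for pods to fail to start. A minimal decoding sketch (assumes the sigs.k8s.io/yaml module is available; the struct covers only the fields of interest here):
	
	    // kubeletcfg.go - decode the fragment above to check driver/endpoint agreement.
	    package main
	
	    import (
	    	"fmt"
	
	    	"sigs.k8s.io/yaml"
	    )
	
	    type kubeletConfig struct {
	    	CgroupDriver             string `json:"cgroupDriver"`
	    	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	    	FailSwapOn               bool   `json:"failSwapOn"`
	    }
	
	    func main() {
	    	doc := []byte("cgroupDriver: cgroupfs\ncontainerRuntimeEndpoint: unix:///var/run/crio/crio.sock\nfailSwapOn: false\n")
	    	var cfg kubeletConfig
	    	if err := yaml.Unmarshal(doc, &cfg); err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
	    		cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.FailSwapOn)
	    }
	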
	I1205 06:46:27.826871  485986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:46:27.834649  485986 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:46:27.834712  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:46:27.842099  485986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:46:27.855421  485986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:46:27.868701  485986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1205 06:46:27.882058  485986 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:46:27.885936  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.995572  485986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:46:28.275034  485986 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:46:28.275045  485986 certs.go:195] generating shared ca certs ...
	I1205 06:46:28.275061  485986 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:46:28.275249  485986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:46:28.275292  485986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:46:28.275298  485986 certs.go:257] generating profile certs ...
	I1205 06:46:28.275410  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:46:28.275475  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:46:28.275515  485986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:46:28.275644  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:46:28.275677  485986 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:46:28.275685  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:46:28.275720  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:46:28.275747  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:46:28.275784  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:46:28.275832  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:28.276503  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:46:28.298544  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:46:28.319289  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:46:28.339576  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:46:28.358300  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:46:28.376540  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:46:28.394872  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:46:28.412281  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:46:28.429993  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:46:28.447492  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:46:28.464800  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:46:28.482269  485986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:46:28.494984  485986 ssh_runner.go:195] Run: openssl version
	I1205 06:46:28.501339  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.508762  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:46:28.516382  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520092  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520163  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.563665  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:46:28.571080  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.578338  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:46:28.585799  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589656  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589716  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.631223  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:46:28.638732  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.646106  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:46:28.653539  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657103  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657161  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.698123  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
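
The sequence above is how the copied CA certificates become trusted: OpenSSL looks a CA up by its subject-name hash, so for each PEM under /usr/share/ca-certificates the log computes the hash with openssl x509 -hash -noout (b5213941 for minikubeCA.pem here) and then asserts that /etc/ssl/certs/<hash>.0 is a symlink to it. A minimal Go sketch of the same idea, assuming openssl on PATH and root privileges (this is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log's two steps: compute the OpenSSL
// subject-name hash of a CA certificate, then point /etc/ssl/certs/<hash>.0
// at it so OpenSSL-based clients can find and trust it.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // same effect as ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
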
	I1205 06:46:28.706515  485986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:46:28.710605  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:46:28.754183  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:46:28.798105  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:46:28.841637  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:46:28.883652  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:46:28.926486  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
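
Each openssl x509 -checkend 86400 call above asserts that the given control-plane certificate remains valid for at least another 24 hours (86400 seconds); a non-zero exit would send minikube down the regeneration path. The same check in plain Go, as a sketch (the file path is just one example from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h and would be regenerated")
	}
}
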
	I1205 06:46:28.968827  485986 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:28.968900  485986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:46:28.968973  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:28.995506  485986 cri.go:89] found id: ""
	I1205 06:46:28.995567  485986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:46:29.004262  485986 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:46:29.004281  485986 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:46:29.004345  485986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:46:29.012409  485986 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.012971  485986 kubeconfig.go:125] found "functional-787602" server: "https://192.168.49.2:8441"
	I1205 06:46:29.014556  485986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:46:29.022548  485986 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:31:50.409182079 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:46:27.876278809 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
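
Drift detection here is simply diff -u between the live kubeadm.yaml and the freshly rendered kubeadm.yaml.new: diff exits 0 when the files match and 1 when they differ, and the unified diff above shows the only drift is the enable-admission-plugins value (this test swaps in NamespaceAutoProvision). A minimal Go sketch of that exit-status check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs "diff -u old new": exit 0 means identical, exit 1 means
// the files differ (drift detected), anything else is a real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Print("kubeadm config drift detected:\n", diff)
	}
}
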
	I1205 06:46:29.022570  485986 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:46:29.022584  485986 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 06:46:29.022652  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:29.056958  485986 cri.go:89] found id: ""
	I1205 06:46:29.057019  485986 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:46:29.073934  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:46:29.081656  485986 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5623 Dec  5 06:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:35 /etc/kubernetes/scheduler.conf
	
	I1205 06:46:29.081722  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:46:29.089572  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:46:29.097486  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.097543  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:46:29.105088  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.112583  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.112639  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.120188  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:46:29.127909  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.127966  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
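
The grep/rm pairs above implement a staleness check: each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8441, and a grep exit status of 1 (no match, hence the "may not be in" messages) means the file points elsewhere and is deleted so the following kubeadm phases can regenerate it. Sketched in Go (illustrative only, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// removeIfStale deletes a kubeconfig that does not mention the expected
// control-plane endpoint, so kubeadm can regenerate it.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // endpoint present: keep the file
	}
	fmt.Printf("%s lacks %s; removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
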
	I1205 06:46:29.135508  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:46:29.143544  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:29.190973  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.485506  485986 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.294504309s)
	I1205 06:46:30.485577  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.689694  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.752398  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
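
Rather than a full kubeadm init, the restart path replays individual init phases against the regenerated config, in the order seen above: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A hedged Go sketch of that sequence (binary and config paths taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{kubeadm}, phase...)
		args = append(args, "--config", cfg)
		// Each phase regenerates one slice of control-plane state.
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", phase, err, out)
			return
		}
	}
}
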
	I1205 06:46:30.798299  485986 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:46:30.798367  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.299303  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.799420  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:32.299360  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:32.798577  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:33.298564  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:33.799310  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.298783  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.799510  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:35.299369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:35.799119  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:36.298663  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:36.798517  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.299207  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.799156  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:38.298684  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:38.798475  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:39.299188  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:39.799197  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.299101  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.798572  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:41.298530  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:41.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:42.298546  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:42.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.298563  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.799313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:44.298528  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:44.799429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:45.299246  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:45.799313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.298849  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.799336  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:47.298524  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:47.798566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:48.298926  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:48.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.298502  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.799392  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:50.298514  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:50.799156  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:51.299002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:51.798510  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.298587  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.798531  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:53.298834  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:53.798937  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:54.298568  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:54.798738  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.298745  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.799302  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:56.298517  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:56.799058  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:57.299228  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:57.798518  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.298540  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.799439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:59.298489  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:59.798827  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:00.298721  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:00.799210  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.298539  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.798525  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:02.298844  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:02.799320  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:03.298437  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:03.799300  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.299120  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.799319  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:05.298499  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:05.799357  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:06.298718  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:06.799264  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.299497  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.799177  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:08.298596  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:08.798469  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:09.298441  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:09.798552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.299123  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.798514  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:11.299549  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:11.799361  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:12.298530  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:12.798490  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.299082  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.798506  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.298576  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.799316  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:15.298516  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:15.798581  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.298604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.799331  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:17.298518  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:17.799198  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:18.298513  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:18.799043  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.298601  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.798562  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:20.298562  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:20.798978  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:21.298537  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:21.798570  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.298807  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.799307  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:23.298910  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:23.798961  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:24.299359  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:24.799509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:25.299086  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:25.798511  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.298495  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.799378  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:27.298528  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:27.799258  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:28.298589  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:28.799234  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.299117  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.798575  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:30.299185  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
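
The run of pgrep calls above is minikube's wait loop: it polls roughly every 500 ms for a kube-apiserver process whose full command line matches kube-apiserver.*minikube.* (-x exact match, -f full command line, -n newest). Here the process never appears in the minute from 06:46:30 to 06:47:30, so minikube falls through to gathering diagnostic logs. A minimal Go sketch of such a poll-with-deadline loop (the timeout value is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms until a kube-apiserver process
// matching the minikube command line appears, or the deadline passes.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matched.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if !waitForAPIServer(time.Minute) {
		fmt.Println("apiserver never appeared; collecting diagnostic logs")
	}
}
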
	I1205 06:47:30.799188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:30.799265  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:30.824550  485986 cri.go:89] found id: ""
	I1205 06:47:30.824564  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.824571  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:30.824577  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:30.824640  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:30.851389  485986 cri.go:89] found id: ""
	I1205 06:47:30.851404  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.851412  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:30.851416  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:30.851473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:30.877392  485986 cri.go:89] found id: ""
	I1205 06:47:30.877406  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.877421  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:30.877425  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:30.877481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:30.902294  485986 cri.go:89] found id: ""
	I1205 06:47:30.902308  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.902315  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:30.902321  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:30.902431  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:30.938796  485986 cri.go:89] found id: ""
	I1205 06:47:30.938810  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.938818  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:30.938823  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:30.938888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:30.965100  485986 cri.go:89] found id: ""
	I1205 06:47:30.965114  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.965121  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:30.965127  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:30.965183  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:30.992646  485986 cri.go:89] found id: ""
	I1205 06:47:30.992661  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.992668  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:30.992676  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:30.992686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:31.063641  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:31.063661  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:31.081045  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:31.081060  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:31.156684  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:31.156698  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:31.156710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:31.237470  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:31.237495  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
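
Each diagnostic pass above asks crictl for every expected component by name; crictl ps -a --quiet --name=<component> prints one container ID per line, so the repeated found id: "" / 0 containers results confirm that nothing in kube-system was ever created, consistent with the apiserver process never appearing. (The backticked which crictl || echo crictl falls back to a bare crictl, and then to docker ps -a, when the binary is not on root's PATH.) A minimal Go sketch of the same scan, with the component list copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for one component;
// an empty slice means the component was never created.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	trimmed := strings.TrimSpace(string(out))
	if err != nil || trimmed == "" {
		return nil
	}
	return strings.Split(trimmed, "\n")
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		if len(containerIDs(c)) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}
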
	I1205 06:47:33.770808  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:33.780812  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:33.780872  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:33.805688  485986 cri.go:89] found id: ""
	I1205 06:47:33.805701  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.805714  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:33.805719  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:33.805779  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:33.832478  485986 cri.go:89] found id: ""
	I1205 06:47:33.832492  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.832499  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:33.832504  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:33.832560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:33.857669  485986 cri.go:89] found id: ""
	I1205 06:47:33.857683  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.857690  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:33.857695  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:33.857750  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:33.883403  485986 cri.go:89] found id: ""
	I1205 06:47:33.883417  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.883426  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:33.883431  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:33.883490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:33.914197  485986 cri.go:89] found id: ""
	I1205 06:47:33.914212  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.914219  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:33.914224  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:33.914295  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:33.944924  485986 cri.go:89] found id: ""
	I1205 06:47:33.944938  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.944945  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:33.944950  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:33.945007  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:33.973129  485986 cri.go:89] found id: ""
	I1205 06:47:33.973143  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.973151  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:33.973158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:33.973169  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:34.044761  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:34.044781  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:34.061807  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:34.061823  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:34.130826  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:34.123392   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.123937   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.125704   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.126261   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.127281   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:34.123392   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.123937   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.125704   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.126261   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.127281   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:34.130840  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:34.130851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:34.209603  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:34.209627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:36.743254  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:36.753733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:36.753810  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:36.779655  485986 cri.go:89] found id: ""
	I1205 06:47:36.779669  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.779676  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:36.779681  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:36.779738  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:36.805062  485986 cri.go:89] found id: ""
	I1205 06:47:36.805076  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.805083  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:36.805089  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:36.805152  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:36.830864  485986 cri.go:89] found id: ""
	I1205 06:47:36.830878  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.830886  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:36.830891  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:36.830961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:36.855729  485986 cri.go:89] found id: ""
	I1205 06:47:36.855749  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.855757  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:36.855762  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:36.855819  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:36.881068  485986 cri.go:89] found id: ""
	I1205 06:47:36.881082  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.881089  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:36.881094  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:36.881157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:36.909354  485986 cri.go:89] found id: ""
	I1205 06:47:36.909367  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.909374  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:36.909380  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:36.909450  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:36.939352  485986 cri.go:89] found id: ""
	I1205 06:47:36.939375  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.939388  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:36.939396  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:36.939407  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:36.954937  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:36.954953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:37.027384  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:37.014899   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.015674   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.017332   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.018005   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.020132   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:37.014899   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.015674   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.017332   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.018005   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.020132   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:37.027396  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:37.027407  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:37.108980  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:37.109004  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:37.137603  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:37.137620  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:39.704971  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:39.715073  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:39.715153  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:39.742797  485986 cri.go:89] found id: ""
	I1205 06:47:39.742811  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.742818  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:39.742823  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:39.742882  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:39.767795  485986 cri.go:89] found id: ""
	I1205 06:47:39.767809  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.767816  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:39.767821  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:39.767888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:39.793002  485986 cri.go:89] found id: ""
	I1205 06:47:39.793016  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.793023  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:39.793028  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:39.793108  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:39.819015  485986 cri.go:89] found id: ""
	I1205 06:47:39.819029  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.819036  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:39.819042  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:39.819098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:39.844387  485986 cri.go:89] found id: ""
	I1205 06:47:39.844401  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.844408  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:39.844413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:39.844487  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:39.871624  485986 cri.go:89] found id: ""
	I1205 06:47:39.871638  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.871644  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:39.871650  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:39.871721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:39.897731  485986 cri.go:89] found id: ""
	I1205 06:47:39.897746  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.897754  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:39.897761  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:39.897771  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:39.962937  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:39.955722   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.956200   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957514   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957911   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.959459   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:39.955722   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.956200   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957514   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957911   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.959459   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:39.962949  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:39.962960  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:40.058236  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:40.058256  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:40.094003  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:40.094022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:40.167448  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:40.167468  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:42.685167  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:42.695150  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:42.695206  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:42.719880  485986 cri.go:89] found id: ""
	I1205 06:47:42.719893  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.719901  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:42.719906  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:42.719965  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:42.748922  485986 cri.go:89] found id: ""
	I1205 06:47:42.748936  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.748943  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:42.748949  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:42.749005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:42.778525  485986 cri.go:89] found id: ""
	I1205 06:47:42.778539  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.778546  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:42.778551  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:42.778610  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:42.804447  485986 cri.go:89] found id: ""
	I1205 06:47:42.804461  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.804468  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:42.804473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:42.804530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:42.829834  485986 cri.go:89] found id: ""
	I1205 06:47:42.829848  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.829855  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:42.829861  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:42.829917  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:42.861917  485986 cri.go:89] found id: ""
	I1205 06:47:42.861937  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.861945  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:42.861951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:42.862011  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:42.889024  485986 cri.go:89] found id: ""
	I1205 06:47:42.889047  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.889055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:42.889063  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:42.889073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:42.954442  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:42.954462  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:42.969793  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:42.969810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:43.044093  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:43.035341   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.036249   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.037845   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.039225   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.040004   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:43.044113  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:43.044124  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:43.137811  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:43.137841  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:45.667791  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:45.677638  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:45.677697  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:45.702199  485986 cri.go:89] found id: ""
	I1205 06:47:45.702213  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.702220  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:45.702226  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:45.702284  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:45.726622  485986 cri.go:89] found id: ""
	I1205 06:47:45.726635  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.726642  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:45.726647  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:45.726703  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:45.752464  485986 cri.go:89] found id: ""
	I1205 06:47:45.752477  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.752484  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:45.752489  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:45.752551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:45.777756  485986 cri.go:89] found id: ""
	I1205 06:47:45.777770  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.777777  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:45.777783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:45.777838  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:45.803428  485986 cri.go:89] found id: ""
	I1205 06:47:45.803443  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.803459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:45.803464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:45.803524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:45.829175  485986 cri.go:89] found id: ""
	I1205 06:47:45.829189  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.829196  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:45.829201  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:45.829260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:45.855195  485986 cri.go:89] found id: ""
	I1205 06:47:45.855210  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.855217  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:45.855224  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:45.855235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:45.887261  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:45.887277  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:45.952635  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:45.952655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:45.968248  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:45.968265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:46.039946  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:46.029945   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.031374   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.032091   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.033908   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.034613   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:46.039964  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:46.039975  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:48.631039  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:48.641171  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:48.641231  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:48.666360  485986 cri.go:89] found id: ""
	I1205 06:47:48.666402  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.666409  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:48.666417  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:48.666473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:48.694222  485986 cri.go:89] found id: ""
	I1205 06:47:48.694237  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.694243  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:48.694249  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:48.694304  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:48.718984  485986 cri.go:89] found id: ""
	I1205 06:47:48.718998  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.719005  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:48.719010  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:48.719067  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:48.744169  485986 cri.go:89] found id: ""
	I1205 06:47:48.744183  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.744190  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:48.744195  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:48.744253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:48.769240  485986 cri.go:89] found id: ""
	I1205 06:47:48.769263  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.769270  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:48.769275  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:48.769341  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:48.798956  485986 cri.go:89] found id: ""
	I1205 06:47:48.798971  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.798978  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:48.798983  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:48.799044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:48.826195  485986 cri.go:89] found id: ""
	I1205 06:47:48.826209  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.826216  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:48.826223  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:48.826233  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:48.892751  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:48.892771  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:48.908154  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:48.908171  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:48.975550  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:48.967655   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.968321   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.969895   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.970429   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.972143   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:48.975561  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:48.975572  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:49.057631  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:49.057651  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.594813  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:51.606364  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:51.606436  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:51.636377  485986 cri.go:89] found id: ""
	I1205 06:47:51.636391  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.636398  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:51.636403  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:51.636464  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:51.662318  485986 cri.go:89] found id: ""
	I1205 06:47:51.662332  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.662338  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:51.662349  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:51.662430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:51.688886  485986 cri.go:89] found id: ""
	I1205 06:47:51.688900  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.688907  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:51.688911  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:51.688969  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:51.717982  485986 cri.go:89] found id: ""
	I1205 06:47:51.717996  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.718003  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:51.718008  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:51.718066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:51.744748  485986 cri.go:89] found id: ""
	I1205 06:47:51.744762  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.744769  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:51.744783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:51.744840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:51.769889  485986 cri.go:89] found id: ""
	I1205 06:47:51.769903  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.769909  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:51.769915  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:51.769970  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:51.797012  485986 cri.go:89] found id: ""
	I1205 06:47:51.797026  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.797033  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:51.797040  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:51.797050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:51.871624  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:51.871643  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.901592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:51.901609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:51.968311  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:51.968333  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:51.983733  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:51.983748  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:52.057625  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:52.048335   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.049167   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.050935   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.051486   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.053878   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:54.557903  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:54.568103  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:54.568164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:54.597086  485986 cri.go:89] found id: ""
	I1205 06:47:54.597100  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.597107  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:54.597112  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:54.597168  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:54.622728  485986 cri.go:89] found id: ""
	I1205 06:47:54.622743  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.622750  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:54.622756  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:54.622812  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:54.646642  485986 cri.go:89] found id: ""
	I1205 06:47:54.646656  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.646663  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:54.646668  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:54.646723  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:54.671271  485986 cri.go:89] found id: ""
	I1205 06:47:54.671286  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.671293  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:54.671299  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:54.671355  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:54.696124  485986 cri.go:89] found id: ""
	I1205 06:47:54.696138  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.696150  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:54.696155  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:54.696210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:54.720362  485986 cri.go:89] found id: ""
	I1205 06:47:54.720375  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.720383  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:54.720388  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:54.720442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:54.754080  485986 cri.go:89] found id: ""
	I1205 06:47:54.754094  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.754101  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:54.754108  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:54.754121  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:54.820260  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:54.820281  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:54.836201  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:54.836217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:54.909051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:54.900823   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.901529   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903370   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903888   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.905505   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
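Every crictl query in these cycles returns an empty ID list, so the control-plane containers are not merely crashing; they are never created. On a kubeadm-provisioned node (which is how minikube bootstraps this cluster) the control plane runs as static pods, so the natural next checks are the manifests and kubelet itself; a sketch, with paths assumed from kubeadm defaults:

	# Static pod manifests kubelet should be acting on (kubeadm default path).
	ls -l /etc/kubernetes/manifests/
	# Is kubelet running at all, and what is it complaining about?
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'apiserver|static pod|failed'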
	I1205 06:47:54.909069  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:54.909080  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:54.984892  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:54.984912  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:57.516912  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:57.527633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:57.527698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:57.553823  485986 cri.go:89] found id: ""
	I1205 06:47:57.553837  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.553844  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:57.553851  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:57.553924  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:57.581054  485986 cri.go:89] found id: ""
	I1205 06:47:57.581068  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.581075  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:57.581080  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:57.581139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:57.606438  485986 cri.go:89] found id: ""
	I1205 06:47:57.606452  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.606460  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:57.606465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:57.606522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:57.632199  485986 cri.go:89] found id: ""
	I1205 06:47:57.632214  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.632220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:57.632226  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:57.632285  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:57.661439  485986 cri.go:89] found id: ""
	I1205 06:47:57.661454  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.661460  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:57.661465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:57.661521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:57.690916  485986 cri.go:89] found id: ""
	I1205 06:47:57.690930  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.690937  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:57.690943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:57.691003  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:57.716612  485986 cri.go:89] found id: ""
	I1205 06:47:57.716625  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.716632  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:57.716640  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:57.716650  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:57.787213  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:57.787235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:57.802362  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:57.802400  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:57.864350  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:57.856663   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.857331   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.858792   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.859379   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.860927   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:57.864360  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:57.864370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:57.941328  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:57.941349  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:00.470137  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:00.483635  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:00.483706  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:00.512315  485986 cri.go:89] found id: ""
	I1205 06:48:00.512330  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.512338  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:00.512345  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:00.512409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:00.546442  485986 cri.go:89] found id: ""
	I1205 06:48:00.546457  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.546464  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:00.546469  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:00.546530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:00.573096  485986 cri.go:89] found id: ""
	I1205 06:48:00.573110  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.573123  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:00.573128  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:00.573187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:00.603254  485986 cri.go:89] found id: ""
	I1205 06:48:00.603268  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.603275  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:00.603280  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:00.603337  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:00.633558  485986 cri.go:89] found id: ""
	I1205 06:48:00.633572  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.633579  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:00.633586  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:00.633651  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:00.660790  485986 cri.go:89] found id: ""
	I1205 06:48:00.660804  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.660810  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:00.660816  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:00.660874  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:00.688773  485986 cri.go:89] found id: ""
	I1205 06:48:00.688786  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.688793  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:00.688800  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:00.688811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:00.753427  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:00.753450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:00.768529  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:00.768545  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:00.832028  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:00.823845   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.824604   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826256   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826855   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.828522   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:00.832038  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:00.832048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:00.909664  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:00.909686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:03.440768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:03.451151  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:03.451210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:03.475560  485986 cri.go:89] found id: ""
	I1205 06:48:03.475574  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.475580  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:03.475586  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:03.475657  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:03.500266  485986 cri.go:89] found id: ""
	I1205 06:48:03.500280  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.500286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:03.500291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:03.500350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:03.528908  485986 cri.go:89] found id: ""
	I1205 06:48:03.528921  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.528928  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:03.528933  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:03.528993  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:03.556882  485986 cri.go:89] found id: ""
	I1205 06:48:03.556896  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.556903  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:03.556908  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:03.556963  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:03.582231  485986 cri.go:89] found id: ""
	I1205 06:48:03.582244  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.582252  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:03.582257  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:03.582315  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:03.611645  485986 cri.go:89] found id: ""
	I1205 06:48:03.611658  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.611665  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:03.611670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:03.611732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:03.637034  485986 cri.go:89] found id: ""
	I1205 06:48:03.637048  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.637055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:03.637062  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:03.637072  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:03.703283  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:03.703305  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:03.718166  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:03.718182  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:03.784612  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:03.776937   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.777755   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779272   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779806   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.781286   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:03.784623  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:03.784645  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:03.865840  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:03.865871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:06.395611  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:06.406190  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:06.406253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:06.431958  485986 cri.go:89] found id: ""
	I1205 06:48:06.431972  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.431979  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:06.431984  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:06.432047  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:06.457302  485986 cri.go:89] found id: ""
	I1205 06:48:06.457317  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.457324  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:06.457329  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:06.457391  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:06.482778  485986 cri.go:89] found id: ""
	I1205 06:48:06.482793  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.482799  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:06.482805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:06.482860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:06.508293  485986 cri.go:89] found id: ""
	I1205 06:48:06.508307  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.508314  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:06.508319  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:06.508457  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:06.537089  485986 cri.go:89] found id: ""
	I1205 06:48:06.537103  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.537110  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:06.537115  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:06.537175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:06.564731  485986 cri.go:89] found id: ""
	I1205 06:48:06.564745  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.564752  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:06.564759  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:06.564815  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:06.590872  485986 cri.go:89] found id: ""
	I1205 06:48:06.590887  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.590895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:06.590903  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:06.590914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:06.658481  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:06.650418   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.651217   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.652805   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.653354   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.654995   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:06.658495  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:06.658505  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:06.733300  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:06.733322  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:06.768591  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:06.768606  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:06.834509  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:06.834529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
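Every describe-nodes failure above and below is the same symptom, not the cause: nothing is listening on localhost:8441 because no kube-apiserver container ever starts. Two standard checks from inside the node would confirm this directly; these commands are suggestions (assuming ss and curl are present in the node image), not part of the captured output:

    # Confirm the apiserver port is closed and the endpoint unreachable.
    sudo ss -ltnp | grep 8441 || echo 'no listener on port 8441'
    curl -k --max-time 5 https://localhost:8441/healthz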
	I1205 06:48:09.350677  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:09.360723  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:09.360783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:09.388219  485986 cri.go:89] found id: ""
	I1205 06:48:09.388232  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.388239  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:09.388244  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:09.388306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:09.416992  485986 cri.go:89] found id: ""
	I1205 06:48:09.417007  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.417013  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:09.417019  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:09.417076  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:09.446304  485986 cri.go:89] found id: ""
	I1205 06:48:09.446318  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.446325  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:09.446330  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:09.446409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:09.472368  485986 cri.go:89] found id: ""
	I1205 06:48:09.472383  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.472390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:09.472395  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:09.472474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:09.497702  485986 cri.go:89] found id: ""
	I1205 06:48:09.497716  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.497722  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:09.497727  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:09.497783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:09.525679  485986 cri.go:89] found id: ""
	I1205 06:48:09.525693  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.525700  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:09.525706  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:09.525765  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:09.552628  485986 cri.go:89] found id: ""
	I1205 06:48:09.552643  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.552650  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:09.552657  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:09.552667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:09.618085  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:09.618105  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:09.633067  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:09.633084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:09.696615  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:09.688707   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.689518   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691086   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691392   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.692864   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:09.696626  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:09.696637  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:09.772055  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:09.772074  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
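The container-status step just above uses a shell fallback so it degrades gracefully on nodes without crictl: the backticks resolve crictl's full path (or leave the bare name if `which` fails), and if that command exits non-zero the || branch retries with docker. Verbatim from the log:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a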
	I1205 06:48:12.303940  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:12.314229  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:12.314298  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:12.348459  485986 cri.go:89] found id: ""
	I1205 06:48:12.348473  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.348480  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:12.348485  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:12.348543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:12.373284  485986 cri.go:89] found id: ""
	I1205 06:48:12.373299  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.373306  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:12.373311  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:12.373375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:12.398539  485986 cri.go:89] found id: ""
	I1205 06:48:12.398559  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.398566  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:12.398571  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:12.398635  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:12.423138  485986 cri.go:89] found id: ""
	I1205 06:48:12.423151  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.423158  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:12.423163  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:12.423223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:12.447667  485986 cri.go:89] found id: ""
	I1205 06:48:12.447680  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.447688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:12.447692  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:12.447751  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:12.472343  485986 cri.go:89] found id: ""
	I1205 06:48:12.472357  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.472364  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:12.472369  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:12.472425  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:12.497076  485986 cri.go:89] found id: ""
	I1205 06:48:12.497089  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.497096  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:12.497102  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:12.497112  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:12.574451  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:12.574470  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:12.610910  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:12.610926  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:12.678117  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:12.678135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:12.692476  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:12.692492  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:12.758359  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:12.750295   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.750936   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.752531   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.753043   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.754596   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:15.258636  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:15.270043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:15.270103  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:15.304750  485986 cri.go:89] found id: ""
	I1205 06:48:15.304764  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.304771  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:15.304776  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:15.304832  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:15.344151  485986 cri.go:89] found id: ""
	I1205 06:48:15.344165  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.344172  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:15.344182  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:15.344249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:15.371527  485986 cri.go:89] found id: ""
	I1205 06:48:15.371541  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.371548  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:15.371553  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:15.371618  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:15.403495  485986 cri.go:89] found id: ""
	I1205 06:48:15.403508  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.403515  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:15.403521  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:15.403581  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:15.429409  485986 cri.go:89] found id: ""
	I1205 06:48:15.429424  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.429431  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:15.429436  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:15.429501  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:15.459234  485986 cri.go:89] found id: ""
	I1205 06:48:15.459248  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.459257  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:15.459263  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:15.459320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:15.488886  485986 cri.go:89] found id: ""
	I1205 06:48:15.488900  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.488907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:15.488915  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:15.488925  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:15.556219  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:15.556239  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:15.571562  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:15.571579  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:15.635494  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:15.628155   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.628632   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630326   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630665   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.632132   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:15.635504  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:15.635514  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:15.717719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:15.717740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.253466  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:18.263430  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:18.263491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:18.305028  485986 cri.go:89] found id: ""
	I1205 06:48:18.305042  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.305049  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:18.305054  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:18.305111  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:18.332689  485986 cri.go:89] found id: ""
	I1205 06:48:18.332702  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.332709  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:18.332715  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:18.332770  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:18.360205  485986 cri.go:89] found id: ""
	I1205 06:48:18.360220  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.360227  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:18.360232  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:18.360291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:18.385479  485986 cri.go:89] found id: ""
	I1205 06:48:18.385493  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.385500  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:18.385505  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:18.385560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:18.413258  485986 cri.go:89] found id: ""
	I1205 06:48:18.413272  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.413279  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:18.413286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:18.413348  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:18.439018  485986 cri.go:89] found id: ""
	I1205 06:48:18.439032  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.439039  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:18.439044  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:18.439099  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:18.465311  485986 cri.go:89] found id: ""
	I1205 06:48:18.465324  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.465341  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:18.465348  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:18.465359  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:18.479885  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:18.479902  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:18.543997  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:18.536169   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.536669   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538416   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538850   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.540401   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:18.544007  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:18.544018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:18.620924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:18.620948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.655034  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:18.655050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
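Since the kubelet is what launches the apiserver static pod, its journal (gathered in each cycle) is the most likely place to find the root cause. A hypothetical follow-up, filtering the same 400 lines the probe collects (the grep pattern is illustrative, not from the log):

    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static pod|failed'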
	I1205 06:48:21.222770  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:21.233411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:21.233478  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:21.263287  485986 cri.go:89] found id: ""
	I1205 06:48:21.263302  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.263309  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:21.263315  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:21.263379  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:21.300915  485986 cri.go:89] found id: ""
	I1205 06:48:21.300929  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.300936  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:21.300941  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:21.301005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:21.328975  485986 cri.go:89] found id: ""
	I1205 06:48:21.328989  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.328999  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:21.329004  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:21.329061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:21.358828  485986 cri.go:89] found id: ""
	I1205 06:48:21.358842  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.358849  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:21.358854  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:21.358914  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:21.384401  485986 cri.go:89] found id: ""
	I1205 06:48:21.384422  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.384429  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:21.384434  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:21.384491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:21.409705  485986 cri.go:89] found id: ""
	I1205 06:48:21.409719  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.409726  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:21.409732  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:21.409791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:21.437633  485986 cri.go:89] found id: ""
	I1205 06:48:21.437650  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.437658  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:21.437665  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:21.437675  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:21.515785  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:21.515808  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:21.549019  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:21.549035  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:21.620027  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:21.620048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:21.635622  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:21.635638  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:21.710252  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:21.702462   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.703235   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.704737   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.705215   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.706750   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:24.210507  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:24.221002  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:24.221061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:24.246259  485986 cri.go:89] found id: ""
	I1205 06:48:24.246273  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.246280  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:24.246285  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:24.246350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:24.274723  485986 cri.go:89] found id: ""
	I1205 06:48:24.274736  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.274743  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:24.274749  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:24.274807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:24.312165  485986 cri.go:89] found id: ""
	I1205 06:48:24.312179  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.312186  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:24.312191  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:24.312248  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:24.351913  485986 cri.go:89] found id: ""
	I1205 06:48:24.351927  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.351934  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:24.351939  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:24.351995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:24.377944  485986 cri.go:89] found id: ""
	I1205 06:48:24.377958  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.377966  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:24.377971  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:24.378029  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:24.403127  485986 cri.go:89] found id: ""
	I1205 06:48:24.403142  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.403149  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:24.403154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:24.403211  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:24.428745  485986 cri.go:89] found id: ""
	I1205 06:48:24.428760  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.428777  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:24.428785  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:24.428795  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:24.495838  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:24.495860  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:24.511294  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:24.511309  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:24.577637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:24.569622   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.570426   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.571915   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.572368   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.573899   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:24.577647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:24.577658  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:24.664395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:24.664422  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:27.196552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:27.206670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:27.206729  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:27.232859  485986 cri.go:89] found id: ""
	I1205 06:48:27.232873  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.232880  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:27.232885  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:27.232944  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:27.261077  485986 cri.go:89] found id: ""
	I1205 06:48:27.261091  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.261098  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:27.261104  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:27.261157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:27.299035  485986 cri.go:89] found id: ""
	I1205 06:48:27.299049  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.299056  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:27.299061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:27.299117  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:27.325080  485986 cri.go:89] found id: ""
	I1205 06:48:27.325094  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.325100  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:27.325105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:27.325165  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:27.355194  485986 cri.go:89] found id: ""
	I1205 06:48:27.355208  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.355215  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:27.355220  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:27.355281  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:27.380260  485986 cri.go:89] found id: ""
	I1205 06:48:27.380274  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.380281  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:27.380286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:27.380340  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:27.404746  485986 cri.go:89] found id: ""
	I1205 06:48:27.404760  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.404767  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:27.404774  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:27.404784  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:27.471214  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:27.471234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:27.486196  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:27.486213  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:27.549013  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:27.540412   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.541998   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.542711   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544155   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544607   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:27.549023  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:27.549034  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:27.626719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:27.626740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
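The dmesg step in each cycle narrows kernel output to warnings and worse. Reading the verbatim command: -P disables the pager, -H selects human-readable output, -L=never disables color, and --level keeps only the listed priorities (flag meanings per util-linux dmesg, stated here for reference):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400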
	I1205 06:48:30.157779  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:30.168828  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:30.168888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:30.196472  485986 cri.go:89] found id: ""
	I1205 06:48:30.196487  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.196494  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:30.196500  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:30.196561  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:30.222435  485986 cri.go:89] found id: ""
	I1205 06:48:30.222449  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.222456  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:30.222463  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:30.222521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:30.252893  485986 cri.go:89] found id: ""
	I1205 06:48:30.252907  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.252914  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:30.252919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:30.252979  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:30.293703  485986 cri.go:89] found id: ""
	I1205 06:48:30.293717  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.293724  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:30.293729  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:30.293791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:30.323711  485986 cri.go:89] found id: ""
	I1205 06:48:30.323724  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.323731  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:30.323746  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:30.323804  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:30.355817  485986 cri.go:89] found id: ""
	I1205 06:48:30.355831  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.355838  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:30.355844  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:30.355905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:30.384820  485986 cri.go:89] found id: ""
	I1205 06:48:30.384834  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.384850  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:30.384858  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:30.384869  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:30.400554  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:30.400571  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:30.462509  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:30.454797   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.455349   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.456851   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.457304   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.458799   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:30.462519  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:30.462529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:30.539861  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:30.539884  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.572611  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:30.572627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
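
The three-second cadence above (06:48:30, then 06:48:33, and so on) is a poll loop: each pass runs pgrep for a kube-apiserver process, then asks the CRI runtime for every control-plane container by name. A minimal Go sketch of that pattern follows; the component list and the ~3s interval are taken from the log, while the code itself is illustrative and not minikube's actual implementation (it assumes crictl is on PATH and sudo needs no password):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // Component names queried in the log above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainers mirrors: sudo crictl ps -a --quiet --name=<component>
    // and returns the container IDs found (none, in the failing run above).
    func listContainers(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for {
            for _, c := range components {
                if ids := listContainers(c); len(ids) == 0 {
                    fmt.Printf("No container was found matching %q\n", c)
                }
            }
            time.Sleep(3 * time.Second) // matches the retry interval visible in the timestamps
        }
    }
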
	I1205 06:48:33.142900  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:33.153456  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:33.153522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:33.178913  485986 cri.go:89] found id: ""
	I1205 06:48:33.178926  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.178933  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:33.178939  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:33.178994  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:33.204173  485986 cri.go:89] found id: ""
	I1205 06:48:33.204187  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.204195  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:33.204200  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:33.204260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:33.231661  485986 cri.go:89] found id: ""
	I1205 06:48:33.231675  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.231688  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:33.231693  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:33.231749  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:33.256100  485986 cri.go:89] found id: ""
	I1205 06:48:33.256113  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.256120  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:33.256125  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:33.256180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:33.288692  485986 cri.go:89] found id: ""
	I1205 06:48:33.288706  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.288713  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:33.288718  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:33.288778  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:33.322902  485986 cri.go:89] found id: ""
	I1205 06:48:33.322916  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.322931  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:33.322936  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:33.322995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:33.354832  485986 cri.go:89] found id: ""
	I1205 06:48:33.354846  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.354853  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:33.354861  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:33.354871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:33.419523  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:33.419542  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:33.436533  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:33.436549  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:33.500717  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:33.492589   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.493351   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.494906   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.495229   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.497011   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:33.500727  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:33.500744  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:33.576166  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:33.576187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
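
The "container status" step above shells out with a fallback chain: use crictl if `which` can find it, otherwise try the bare name, and if that whole pipeline fails, fall back to docker. A short sketch of the same idea (illustrative only; it assumes /bin/bash exists on the node, as the log's own commands do):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback as the log line: prefer crictl, else docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
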
	I1205 06:48:36.103891  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:36.114026  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:36.114086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:36.138404  485986 cri.go:89] found id: ""
	I1205 06:48:36.138419  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.138426  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:36.138432  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:36.138490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:36.165135  485986 cri.go:89] found id: ""
	I1205 06:48:36.165149  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.165156  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:36.165161  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:36.165218  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:36.190238  485986 cri.go:89] found id: ""
	I1205 06:48:36.190252  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.190259  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:36.190264  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:36.190323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:36.216962  485986 cri.go:89] found id: ""
	I1205 06:48:36.216975  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.216982  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:36.216987  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:36.217043  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:36.241075  485986 cri.go:89] found id: ""
	I1205 06:48:36.241089  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.241096  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:36.241107  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:36.241174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:36.267257  485986 cri.go:89] found id: ""
	I1205 06:48:36.267272  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.267278  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:36.267284  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:36.267350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:36.293288  485986 cri.go:89] found id: ""
	I1205 06:48:36.293310  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.293320  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:36.293327  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:36.293338  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:36.363749  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:36.356228   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.356654   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358204   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358589   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.360031   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:36.363759  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:36.363769  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:36.438180  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:36.438203  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.466903  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:36.466919  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:36.532968  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:36.532989  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
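
Every "describe nodes" attempt above fails the same way: kubectl cannot open a TCP connection to localhost:8441, meaning nothing is listening on the apiserver port. A quick connectivity probe that reproduces just that symptom (illustrative; the port number 8441 is taken from the errors above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" in the stderr above means this dial fails.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
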
	I1205 06:48:39.048421  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:39.059045  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:39.059109  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:39.083511  485986 cri.go:89] found id: ""
	I1205 06:48:39.083526  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.083532  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:39.083537  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:39.083599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:39.107712  485986 cri.go:89] found id: ""
	I1205 06:48:39.107725  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.107732  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:39.107736  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:39.107793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:39.132566  485986 cri.go:89] found id: ""
	I1205 06:48:39.132580  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.132588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:39.132593  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:39.132650  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:39.161417  485986 cri.go:89] found id: ""
	I1205 06:48:39.161431  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.161438  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:39.161443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:39.161511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:39.186314  485986 cri.go:89] found id: ""
	I1205 06:48:39.186328  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.186335  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:39.186340  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:39.186428  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:39.210957  485986 cri.go:89] found id: ""
	I1205 06:48:39.210971  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.210980  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:39.210986  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:39.211044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:39.236120  485986 cri.go:89] found id: ""
	I1205 06:48:39.236134  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.236141  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:39.236148  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:39.236159  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:39.250894  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:39.250911  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:39.334545  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:39.334556  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:39.334567  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:39.413949  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:39.413970  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:39.444354  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:39.444370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.015174  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:42.026667  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:42.026732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:42.056643  485986 cri.go:89] found id: ""
	I1205 06:48:42.056658  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.056666  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:42.056672  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:42.056732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:42.084714  485986 cri.go:89] found id: ""
	I1205 06:48:42.084731  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.084745  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:42.084750  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:42.084817  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:42.115735  485986 cri.go:89] found id: ""
	I1205 06:48:42.115750  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.115757  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:42.115763  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:42.115828  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:42.148687  485986 cri.go:89] found id: ""
	I1205 06:48:42.148703  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.148711  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:42.148717  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:42.148783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:42.183060  485986 cri.go:89] found id: ""
	I1205 06:48:42.183076  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.183084  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:42.183089  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:42.183162  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:42.216582  485986 cri.go:89] found id: ""
	I1205 06:48:42.216598  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.216606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:42.216612  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:42.216684  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:42.247171  485986 cri.go:89] found id: ""
	I1205 06:48:42.247186  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.247193  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:42.247201  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:42.247217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:42.285459  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:42.285487  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.355504  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:42.355523  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:42.370693  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:42.370709  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:42.438568  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:42.438578  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:42.438588  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
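
With no containers to inspect, each pass falls back to the systemd journals: the last 400 lines of the kubelet and crio units, plus a filtered dmesg. A sketch of that collection step (illustrative; the unit names and the 400-line cap come from the journalctl commands in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherUnit mirrors: sudo journalctl -u <unit> -n 400
    func gatherUnit(unit string) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
        return string(out), err
    }

    func main() {
        for _, u := range []string{"kubelet", "crio"} {
            logs, err := gatherUnit(u)
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", u, err)
                continue
            }
            fmt.Printf("== %s (%d bytes) ==\n", u, len(logs))
        }
    }
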
	I1205 06:48:45.014965  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:45.054270  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:45.054339  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:45.114057  485986 cri.go:89] found id: ""
	I1205 06:48:45.114075  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.114090  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:45.114097  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:45.114172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:45.165369  485986 cri.go:89] found id: ""
	I1205 06:48:45.165394  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.165402  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:45.165408  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:45.165494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:45.212325  485986 cri.go:89] found id: ""
	I1205 06:48:45.212342  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.212349  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:45.212355  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:45.212424  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:45.254096  485986 cri.go:89] found id: ""
	I1205 06:48:45.254114  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.254127  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:45.254134  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:45.254294  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:45.305666  485986 cri.go:89] found id: ""
	I1205 06:48:45.305681  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.305688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:45.305694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:45.305753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:45.347701  485986 cri.go:89] found id: ""
	I1205 06:48:45.347715  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.347721  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:45.347726  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:45.347793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:45.373745  485986 cri.go:89] found id: ""
	I1205 06:48:45.373760  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.373775  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:45.373782  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:45.373793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:45.439756  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:45.439776  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:45.454781  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:45.454797  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:45.521815  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:45.514514   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.515029   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516480   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516967   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.518548   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:45.521826  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:45.521838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:45.602427  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:45.602455  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.134541  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:48.144703  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:48.144768  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:48.169929  485986 cri.go:89] found id: ""
	I1205 06:48:48.169942  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.169949  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:48.169954  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:48.170014  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:48.194815  485986 cri.go:89] found id: ""
	I1205 06:48:48.194828  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.194835  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:48.194840  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:48.194898  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:48.220017  485986 cri.go:89] found id: ""
	I1205 06:48:48.220031  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.220038  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:48.220043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:48.220101  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:48.249449  485986 cri.go:89] found id: ""
	I1205 06:48:48.249462  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.249470  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:48.249481  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:48.249552  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:48.284921  485986 cri.go:89] found id: ""
	I1205 06:48:48.284935  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.284942  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:48.284947  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:48.285006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:48.315138  485986 cri.go:89] found id: ""
	I1205 06:48:48.315152  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.315159  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:48.315164  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:48.315223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:48.347265  485986 cri.go:89] found id: ""
	I1205 06:48:48.347279  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.347286  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:48.347293  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:48.347304  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.375662  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:48.375678  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:48.440841  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:48.440863  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:48.456128  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:48.456144  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:48.523196  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:48.515425   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.515785   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.517359   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.518051   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.519586   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:48.523206  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:48.523216  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:51.100852  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:51.111413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:51.111475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:51.139392  485986 cri.go:89] found id: ""
	I1205 06:48:51.139406  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.139414  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:51.139419  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:51.139483  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:51.167265  485986 cri.go:89] found id: ""
	I1205 06:48:51.167279  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.167286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:51.167291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:51.167347  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:51.192337  485986 cri.go:89] found id: ""
	I1205 06:48:51.192351  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.192358  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:51.192363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:51.192419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:51.217599  485986 cri.go:89] found id: ""
	I1205 06:48:51.217614  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.217621  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:51.217627  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:51.217683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:51.242555  485986 cri.go:89] found id: ""
	I1205 06:48:51.242568  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.242576  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:51.242580  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:51.242641  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:51.270447  485986 cri.go:89] found id: ""
	I1205 06:48:51.270462  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.270469  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:51.270474  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:51.270551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:51.300340  485986 cri.go:89] found id: ""
	I1205 06:48:51.300353  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.300360  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:51.300375  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:51.300385  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:51.373583  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:51.373604  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:51.388609  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:51.388624  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:51.449562  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:51.442150   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.442836   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444322   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444649   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.446074   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:51.449572  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:51.449584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:51.523352  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:51.523373  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.052404  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:54.065168  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:54.065280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:54.097086  485986 cri.go:89] found id: ""
	I1205 06:48:54.097102  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.097109  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:54.097114  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:54.097173  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:54.128973  485986 cri.go:89] found id: ""
	I1205 06:48:54.128988  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.128995  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:54.129000  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:54.129066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:54.163279  485986 cri.go:89] found id: ""
	I1205 06:48:54.163294  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.163301  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:54.163305  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:54.163363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:54.200034  485986 cri.go:89] found id: ""
	I1205 06:48:54.200049  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.200056  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:54.200061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:54.200119  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:54.232483  485986 cri.go:89] found id: ""
	I1205 06:48:54.232498  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.232504  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:54.232509  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:54.232572  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:54.256577  485986 cri.go:89] found id: ""
	I1205 06:48:54.256598  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.256606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:54.256611  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:54.256673  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:54.288762  485986 cri.go:89] found id: ""
	I1205 06:48:54.288788  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.288796  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:54.288804  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:54.288815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:54.368738  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:54.368758  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.395932  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:54.395948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:54.464047  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:54.464066  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:54.479400  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:54.479416  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:54.546819  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:54.538668   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.539294   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.540985   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.541524   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.543065   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:57.047675  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:57.058076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:57.058143  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:57.082332  485986 cri.go:89] found id: ""
	I1205 06:48:57.082347  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.082355  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:57.082360  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:57.082442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:57.108051  485986 cri.go:89] found id: ""
	I1205 06:48:57.108071  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.108078  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:57.108083  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:57.108139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:57.137107  485986 cri.go:89] found id: ""
	I1205 06:48:57.137129  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.137136  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:57.137141  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:57.137198  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:57.163240  485986 cri.go:89] found id: ""
	I1205 06:48:57.163272  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.163279  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:57.163285  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:57.163352  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:57.192699  485986 cri.go:89] found id: ""
	I1205 06:48:57.192725  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.192735  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:57.192740  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:57.192807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:57.220916  485986 cri.go:89] found id: ""
	I1205 06:48:57.220931  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.220938  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:57.220943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:57.221010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:57.248028  485986 cri.go:89] found id: ""
	I1205 06:48:57.248042  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.248049  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:57.248057  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:57.248068  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:57.262955  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:57.262971  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:57.355127  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:57.346596   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.347188   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.348974   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.349619   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.351449   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:57.355141  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:57.355151  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:57.433116  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:57.433135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:57.464587  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:57.464603  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
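The timestamps (06:48:54, 06:48:57, 06:49:00, ...) show the same probe cycle retrying on a roughly three-second interval. In shell terms the loop looks like the sketch below; the 240 s budget is an assumption for illustration, not the test's actual timeout:

    deadline=$((SECONDS + 240))   # assumed budget, not minikube's real timeout
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
      sleep 3                     # matches the ~3 s cadence in the log
    done
    echo "kube-apiserver is up"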
	I1205 06:49:00.033434  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:00.083145  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:00.083219  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:00.188573  485986 cri.go:89] found id: ""
	I1205 06:49:00.188591  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.188607  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:00.188613  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:00.188683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:00.262241  485986 cri.go:89] found id: ""
	I1205 06:49:00.262258  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.262265  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:00.262271  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:00.262346  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:00.303849  485986 cri.go:89] found id: ""
	I1205 06:49:00.303866  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.303875  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:00.303881  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:00.303981  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:00.349047  485986 cri.go:89] found id: ""
	I1205 06:49:00.349063  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.349071  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:00.349076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:00.349147  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:00.379299  485986 cri.go:89] found id: ""
	I1205 06:49:00.379317  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.379325  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:00.379332  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:00.379419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:00.409559  485986 cri.go:89] found id: ""
	I1205 06:49:00.409575  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.409582  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:00.409589  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:00.409656  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:00.439884  485986 cri.go:89] found id: ""
	I1205 06:49:00.439899  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.439907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:00.439916  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:00.439933  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.508652  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:00.508672  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:00.524482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:00.524504  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:00.586066  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:00.578087   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.578919   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.580633   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.581135   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.582631   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:00.586076  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:00.586087  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:00.663208  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:00.663229  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
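The "container status" probe above uses a fallback chain: resolve crictl's full path (defaulting to the bare name if `which` finds nothing), and only if that invocation fails fall back to docker. Unpacked, the logged one-liner is equivalent to this illustrative long-hand form:

    # Try crictl first, resolving its path; on any failure, try docker.
    if ! sudo "$(which crictl || echo crictl)" ps -a; then
      sudo docker ps -a   # fallback when crictl is missing or errors out
    fi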
	I1205 06:49:03.193638  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:03.204025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:03.204086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:03.228565  485986 cri.go:89] found id: ""
	I1205 06:49:03.228579  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.228586  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:03.228592  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:03.228649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:03.254850  485986 cri.go:89] found id: ""
	I1205 06:49:03.254864  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.254871  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:03.254876  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:03.254937  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:03.289088  485986 cri.go:89] found id: ""
	I1205 06:49:03.289101  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.289108  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:03.289113  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:03.289194  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:03.322876  485986 cri.go:89] found id: ""
	I1205 06:49:03.322891  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.322905  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:03.322910  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:03.322971  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:03.352868  485986 cri.go:89] found id: ""
	I1205 06:49:03.352883  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.352890  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:03.352895  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:03.352957  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:03.381474  485986 cri.go:89] found id: ""
	I1205 06:49:03.381495  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.381502  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:03.381508  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:03.381569  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:03.410037  485986 cri.go:89] found id: ""
	I1205 06:49:03.410051  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.410058  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:03.410071  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:03.410081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:03.479009  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:03.479028  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:03.493685  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:03.493702  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:03.561170  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:03.553306   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.554220   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.555759   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.556128   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.557670   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:03.561179  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:03.561190  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:03.638291  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:03.638315  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.175002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:06.185259  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:06.185319  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:06.215092  485986 cri.go:89] found id: ""
	I1205 06:49:06.215106  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.215113  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:06.215119  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:06.215175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:06.245195  485986 cri.go:89] found id: ""
	I1205 06:49:06.245209  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.245216  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:06.245221  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:06.245283  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:06.271319  485986 cri.go:89] found id: ""
	I1205 06:49:06.271333  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.271340  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:06.271346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:06.271404  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:06.300132  485986 cri.go:89] found id: ""
	I1205 06:49:06.300146  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.300152  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:06.300158  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:06.300216  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:06.337931  485986 cri.go:89] found id: ""
	I1205 06:49:06.337945  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.337952  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:06.337957  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:06.338017  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:06.365963  485986 cri.go:89] found id: ""
	I1205 06:49:06.365978  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.365985  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:06.365991  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:06.366048  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:06.396366  485986 cri.go:89] found id: ""
	I1205 06:49:06.396382  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.396389  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:06.396397  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:06.396410  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.424940  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:06.424956  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:06.490847  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:06.490864  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:06.506209  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:06.506225  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:06.572331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:06.564878   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.565472   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.566969   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.567471   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.568900   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:06.572342  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:06.572352  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
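With crictl reporting zero containers for every control-plane component, the kubelet and CRI-O journals gathered in each cycle are where the root cause will surface. A quick way to skim both for the first errors (illustrative; run on the node):

    for unit in kubelet crio; do
      echo "== $unit =="
      sudo journalctl -u "$unit" -n 400 --no-pager | grep -iE 'error|fail' | head -n 10
    done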
	I1205 06:49:09.157509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:09.167469  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:09.167529  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:09.192289  485986 cri.go:89] found id: ""
	I1205 06:49:09.192304  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.192311  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:09.192316  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:09.192375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:09.217082  485986 cri.go:89] found id: ""
	I1205 06:49:09.217096  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.217103  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:09.217108  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:09.217167  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:09.242357  485986 cri.go:89] found id: ""
	I1205 06:49:09.242371  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.242412  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:09.242417  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:09.242474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:09.267197  485986 cri.go:89] found id: ""
	I1205 06:49:09.267211  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.267218  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:09.267223  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:09.267282  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:09.302740  485986 cri.go:89] found id: ""
	I1205 06:49:09.302754  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.302761  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:09.302766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:09.302824  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:09.338883  485986 cri.go:89] found id: ""
	I1205 06:49:09.338910  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.338917  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:09.338923  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:09.338988  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:09.365834  485986 cri.go:89] found id: ""
	I1205 06:49:09.365848  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.365855  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:09.365862  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:09.365872  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:09.433408  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:09.433430  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:09.448763  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:09.448785  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:09.510400  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:09.502828   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.503352   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505004   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505431   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.506857   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:09.510413  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:09.510424  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:09.589135  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:09.589155  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:12.118439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:12.128584  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:12.128642  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:12.153045  485986 cri.go:89] found id: ""
	I1205 06:49:12.153059  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.153066  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:12.153071  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:12.153138  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:12.181785  485986 cri.go:89] found id: ""
	I1205 06:49:12.181798  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.181805  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:12.181810  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:12.181867  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:12.208813  485986 cri.go:89] found id: ""
	I1205 06:49:12.208827  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.208834  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:12.208845  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:12.208903  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:12.234917  485986 cri.go:89] found id: ""
	I1205 06:49:12.234931  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.234938  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:12.234943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:12.235004  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:12.260438  485986 cri.go:89] found id: ""
	I1205 06:49:12.260452  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.260459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:12.260464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:12.260531  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:12.296968  485986 cri.go:89] found id: ""
	I1205 06:49:12.296981  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.296988  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:12.296994  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:12.297050  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:12.333915  485986 cri.go:89] found id: ""
	I1205 06:49:12.333929  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.333936  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:12.333943  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:12.333953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:12.406977  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:12.406998  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:12.422290  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:12.422306  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:12.488646  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:12.480809   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.481450   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.482905   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.483507   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.485097   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:12.488656  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:12.488666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:12.564028  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:12.564050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.095313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:15.105802  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:15.105864  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:15.133035  485986 cri.go:89] found id: ""
	I1205 06:49:15.133049  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.133057  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:15.133062  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:15.133118  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:15.158425  485986 cri.go:89] found id: ""
	I1205 06:49:15.158439  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.158446  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:15.158451  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:15.158507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:15.183550  485986 cri.go:89] found id: ""
	I1205 06:49:15.183564  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.183571  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:15.183576  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:15.183637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:15.209390  485986 cri.go:89] found id: ""
	I1205 06:49:15.209405  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.209413  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:15.209418  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:15.209481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:15.234806  485986 cri.go:89] found id: ""
	I1205 06:49:15.234820  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.234828  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:15.234833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:15.234893  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:15.260606  485986 cri.go:89] found id: ""
	I1205 06:49:15.260621  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.260628  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:15.260633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:15.260689  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:15.291752  485986 cri.go:89] found id: ""
	I1205 06:49:15.291766  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.291773  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:15.291782  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:15.291793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:15.308482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:15.308499  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:15.380232  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:15.372488   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.372953   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374118   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374587   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.376095   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:15.380242  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:15.380253  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:15.456924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:15.456947  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.486075  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:15.486091  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
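The refused endpoint is whatever server the on-node kubeconfig names, so confirming it directly rules out a port mismatch; the binary and kubeconfig paths below are the ones logged above:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'
    # Expect https://localhost:8441 per the errors above; is anything listening?
    sudo ss -tlnp | grep 8441 || echo "no listener on port 8441"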
	I1205 06:49:18.055175  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:18.065657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:18.065716  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:18.092418  485986 cri.go:89] found id: ""
	I1205 06:49:18.092432  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.092440  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:18.092445  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:18.092504  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:18.119095  485986 cri.go:89] found id: ""
	I1205 06:49:18.119109  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.119116  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:18.119120  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:18.119174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:18.158317  485986 cri.go:89] found id: ""
	I1205 06:49:18.158331  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.158338  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:18.158343  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:18.158435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:18.182920  485986 cri.go:89] found id: ""
	I1205 06:49:18.182934  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.182941  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:18.182946  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:18.183006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:18.209415  485986 cri.go:89] found id: ""
	I1205 06:49:18.209430  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.209438  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:18.209443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:18.209512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:18.236631  485986 cri.go:89] found id: ""
	I1205 06:49:18.236644  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.236651  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:18.236656  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:18.236713  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:18.262726  485986 cri.go:89] found id: ""
	I1205 06:49:18.262740  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.262747  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:18.262754  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:18.262765  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:18.339996  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:18.340018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:18.358676  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:18.358696  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:18.426638  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:18.417748   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.418455   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420167   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420749   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.422549   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:18.426647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:18.426706  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:18.504263  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:18.504284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
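When the apiserver never shows up even as an exited container, the usual next check is whether the kubelet can see its static pod manifest. The path below is the standard kubeadm location and is an assumption here, not something this log confirms:

    ls -l /etc/kubernetes/manifests/   # kube-apiserver.yaml should be present
    sudo journalctl -u kubelet -n 400 --no-pager | grep -i 'static pod' | tail -n 5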
	I1205 06:49:21.036369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:21.046428  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:21.046488  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:21.071147  485986 cri.go:89] found id: ""
	I1205 06:49:21.071161  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.071168  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:21.071173  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:21.071235  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:21.095397  485986 cri.go:89] found id: ""
	I1205 06:49:21.095412  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.095421  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:21.095426  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:21.095485  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:21.119759  485986 cri.go:89] found id: ""
	I1205 06:49:21.119773  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.119780  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:21.119786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:21.119850  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:21.144972  485986 cri.go:89] found id: ""
	I1205 06:49:21.144986  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.144993  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:21.144998  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:21.145054  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:21.170022  485986 cri.go:89] found id: ""
	I1205 06:49:21.170035  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.170042  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:21.170047  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:21.170104  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:21.198867  485986 cri.go:89] found id: ""
	I1205 06:49:21.198881  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.198887  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:21.198893  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:21.198948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:21.224547  485986 cri.go:89] found id: ""
	I1205 06:49:21.224561  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.224568  485986 logs.go:284] No container was found matching "kindnet"
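
Editor's note: each probe cycle sweeps every control-plane container name through crictl and finds none, running or exited, which points at the static pods never being (re)created rather than being created and then crashing. The sweep can be reproduced by hand with the same flags the log shows:

    # List any control-plane containers CRI-O knows about, in any state.
    for c in kube-apiserver etcd kube-scheduler kube-controller-manager; do
      echo "== $c =="
      sudo crictl ps -a --name="$c"
    done
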
	I1205 06:49:21.224575  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:21.224585  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:21.291060  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:21.291081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:21.308799  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:21.308815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:21.380254  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:21.380264  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:21.380275  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:21.456817  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:21.456838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:23.986703  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:23.996959  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:23.997028  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:24.026421  485986 cri.go:89] found id: ""
	I1205 06:49:24.026435  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.026443  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:24.026450  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:24.026512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:24.056568  485986 cri.go:89] found id: ""
	I1205 06:49:24.056582  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.056589  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:24.056595  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:24.056654  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:24.082518  485986 cri.go:89] found id: ""
	I1205 06:49:24.082532  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.082539  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:24.082544  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:24.082605  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:24.108752  485986 cri.go:89] found id: ""
	I1205 06:49:24.108766  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.108783  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:24.108788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:24.108854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:24.142101  485986 cri.go:89] found id: ""
	I1205 06:49:24.142133  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.142140  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:24.142146  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:24.142214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:24.169035  485986 cri.go:89] found id: ""
	I1205 06:49:24.169050  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.169057  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:24.169067  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:24.169139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:24.194140  485986 cri.go:89] found id: ""
	I1205 06:49:24.194154  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.194161  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:24.194169  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:24.194179  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:24.269020  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:24.269042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:24.319041  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:24.319057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:24.402423  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:24.402446  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:24.418669  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:24.418687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:24.486837  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
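
Editor's note: the cycles repeat on a roughly three-second cadence (06:49:21, 06:49:24, 06:49:27, ...), each opening with the same pgrep probe for a kube-apiserver process. A minimal sketch of an equivalent wait loop, with the timeout budget assumed:

    # Poll for a kube-apiserver process every ~3s until a deadline.
    deadline=$((SECONDS + 300))   # 300s is an assumed budget, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out"; exit 1; }
      sleep 3
    done
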
	I1205 06:49:26.988496  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:26.998567  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:26.998632  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:27.030118  485986 cri.go:89] found id: ""
	I1205 06:49:27.030131  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.030138  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:27.030144  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:27.030200  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:27.057209  485986 cri.go:89] found id: ""
	I1205 06:49:27.057224  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.057230  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:27.057236  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:27.057291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:27.083393  485986 cri.go:89] found id: ""
	I1205 06:49:27.083408  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.083415  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:27.083420  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:27.083480  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:27.108369  485986 cri.go:89] found id: ""
	I1205 06:49:27.108383  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.108390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:27.108394  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:27.108454  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:27.136631  485986 cri.go:89] found id: ""
	I1205 06:49:27.136645  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.136653  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:27.136659  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:27.136726  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:27.163262  485986 cri.go:89] found id: ""
	I1205 06:49:27.163277  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.163286  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:27.163294  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:27.163353  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:27.188133  485986 cri.go:89] found id: ""
	I1205 06:49:27.188152  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.188160  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:27.188167  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:27.188177  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:27.252259  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:27.252270  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:27.252280  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:27.330222  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:27.330243  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:27.360158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:27.360174  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:27.433608  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:27.433628  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
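
Editor's note: between apiserver probes the runner tails the kubelet and crio journals plus the kernel ring buffer; those are the logs most likely to say why no control-plane container ever started. The same tails can be pulled by hand (the journalctl/dmesg flags mirror the log; the grep pattern is an assumption):

    # Last 400 kubelet lines, filtered for apiserver/static-pod chatter.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'apiserver|static pod'
    # Kernel warnings and worse, human-readable, no pager or color.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
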
	I1205 06:49:29.949566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:29.960768  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:29.960834  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:29.986155  485986 cri.go:89] found id: ""
	I1205 06:49:29.986169  485986 logs.go:282] 0 containers: []
	W1205 06:49:29.986176  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:29.986181  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:29.986241  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:30.063119  485986 cri.go:89] found id: ""
	I1205 06:49:30.063137  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.063144  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:30.063163  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:30.063243  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:30.093759  485986 cri.go:89] found id: ""
	I1205 06:49:30.093774  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.093782  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:30.093788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:30.093860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:30.123430  485986 cri.go:89] found id: ""
	I1205 06:49:30.123452  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.123460  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:30.123465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:30.123554  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:30.151722  485986 cri.go:89] found id: ""
	I1205 06:49:30.151744  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.151752  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:30.151758  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:30.151820  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:30.186802  485986 cri.go:89] found id: ""
	I1205 06:49:30.186831  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.186852  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:30.186859  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:30.186929  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:30.213270  485986 cri.go:89] found id: ""
	I1205 06:49:30.213293  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.213301  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:30.213309  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:30.213320  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:30.279872  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:30.279893  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:30.296737  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:30.296759  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:30.374429  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:30.374439  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:30.374450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:30.450678  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:30.450701  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:32.984051  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:32.993990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:32.994049  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:33.020636  485986 cri.go:89] found id: ""
	I1205 06:49:33.020650  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.020657  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:33.020663  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:33.020719  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:33.049013  485986 cri.go:89] found id: ""
	I1205 06:49:33.049027  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.049034  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:33.049039  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:33.049098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:33.078567  485986 cri.go:89] found id: ""
	I1205 06:49:33.078581  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.078588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:33.078594  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:33.078652  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:33.103212  485986 cri.go:89] found id: ""
	I1205 06:49:33.103226  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.103233  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:33.103238  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:33.103293  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:33.127983  485986 cri.go:89] found id: ""
	I1205 06:49:33.127997  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.128004  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:33.128030  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:33.128085  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:33.153777  485986 cri.go:89] found id: ""
	I1205 06:49:33.153792  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.153799  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:33.153805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:33.153863  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:33.178536  485986 cri.go:89] found id: ""
	I1205 06:49:33.178550  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.178557  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:33.178565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:33.178576  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:33.244570  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:33.244594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:33.259835  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:33.259851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:33.338788  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:33.338799  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:33.338810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:33.425207  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:33.425236  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
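
Editor's note: the describe-nodes fallback shells into the node and runs the pinned kubectl binary against the in-node kubeconfig, so the identical failure can be reproduced from the host. A sketch assuming the profile under test is the active minikube profile:

    # Re-run the exact command from the log through minikube's node shell.
    minikube ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
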
	I1205 06:49:35.956397  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:35.966480  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:35.966543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:35.995353  485986 cri.go:89] found id: ""
	I1205 06:49:35.995367  485986 logs.go:282] 0 containers: []
	W1205 06:49:35.995374  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:35.995378  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:35.995435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:36.024388  485986 cri.go:89] found id: ""
	I1205 06:49:36.024403  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.024410  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:36.024415  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:36.024477  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:36.051022  485986 cri.go:89] found id: ""
	I1205 06:49:36.051036  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.051054  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:36.051059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:36.051124  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:36.076096  485986 cri.go:89] found id: ""
	I1205 06:49:36.076110  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.076117  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:36.076123  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:36.076180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:36.105105  485986 cri.go:89] found id: ""
	I1205 06:49:36.105119  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.105127  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:36.105131  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:36.105187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:36.131094  485986 cri.go:89] found id: ""
	I1205 06:49:36.131107  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.131114  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:36.131120  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:36.131180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:36.160327  485986 cri.go:89] found id: ""
	I1205 06:49:36.160342  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.160349  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:36.160357  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:36.160367  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:36.175190  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:36.175205  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:36.236428  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:36.236479  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:36.236489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:36.320584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:36.320608  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:36.354951  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:36.354968  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:38.924529  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:38.934948  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:38.935008  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:38.961612  485986 cri.go:89] found id: ""
	I1205 06:49:38.961626  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.961633  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:38.961638  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:38.961699  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:38.987542  485986 cri.go:89] found id: ""
	I1205 06:49:38.987562  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.987569  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:38.987574  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:38.987637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:39.017388  485986 cri.go:89] found id: ""
	I1205 06:49:39.017402  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.017409  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:39.017414  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:39.017475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:39.043798  485986 cri.go:89] found id: ""
	I1205 06:49:39.043813  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.043821  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:39.043826  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:39.043883  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:39.072134  485986 cri.go:89] found id: ""
	I1205 06:49:39.072148  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.072155  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:39.072160  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:39.072214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:39.097127  485986 cri.go:89] found id: ""
	I1205 06:49:39.097141  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.097148  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:39.097154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:39.097215  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:39.125406  485986 cri.go:89] found id: ""
	I1205 06:49:39.125420  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.125427  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:39.125434  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:39.125447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:39.191762  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:39.191782  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:39.206972  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:39.206987  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:39.274830  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:39.274841  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:39.274851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:39.365052  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:39.365073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:41.896143  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:41.906833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:41.906906  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:41.934416  485986 cri.go:89] found id: ""
	I1205 06:49:41.934430  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.934437  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:41.934442  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:41.934498  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:41.962049  485986 cri.go:89] found id: ""
	I1205 06:49:41.962063  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.962079  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:41.962084  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:41.962150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:41.991028  485986 cri.go:89] found id: ""
	I1205 06:49:41.991042  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.991049  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:41.991053  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:41.991121  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:42.020514  485986 cri.go:89] found id: ""
	I1205 06:49:42.020536  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.020544  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:42.020550  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:42.020614  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:42.047453  485986 cri.go:89] found id: ""
	I1205 06:49:42.047467  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.047474  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:42.047479  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:42.047535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:42.079004  485986 cri.go:89] found id: ""
	I1205 06:49:42.079019  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.079026  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:42.079033  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:42.079098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:42.110781  485986 cri.go:89] found id: ""
	I1205 06:49:42.110806  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.110814  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:42.110821  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:42.110832  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:42.191665  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:42.191688  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:42.241592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:42.241609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:42.314021  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:42.314041  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:42.331123  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:42.331139  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:42.401371  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
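
Editor's note: with the kubelet journal, dmesg, and crio journal already collected above, the remaining suspect is the runtime itself: if CRI-O is unhealthy the kubelet cannot create the static pods, which would match the consistently empty crictl listings. Two standard checks, as a sketch:

    # Is the CRI-O service up?
    sudo systemctl --no-pager status crio | head -n 20
    # Runtime status and conditions, as reported over the CRI socket.
    sudo crictl info | head -n 40
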
	I1205 06:49:44.902557  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:44.913856  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:44.913928  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:44.944329  485986 cri.go:89] found id: ""
	I1205 06:49:44.944343  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.944350  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:44.944355  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:44.944411  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:44.972877  485986 cri.go:89] found id: ""
	I1205 06:49:44.972890  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.972897  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:44.972902  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:44.972961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:44.997771  485986 cri.go:89] found id: ""
	I1205 06:49:44.997785  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.997792  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:44.997797  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:44.997858  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:45.044196  485986 cri.go:89] found id: ""
	I1205 06:49:45.044212  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.044220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:45.044225  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:45.044296  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:45.100218  485986 cri.go:89] found id: ""
	I1205 06:49:45.100234  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.100242  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:45.100247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:45.100322  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:45.143680  485986 cri.go:89] found id: ""
	I1205 06:49:45.143696  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.143704  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:45.143710  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:45.144010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:45.184794  485986 cri.go:89] found id: ""
	I1205 06:49:45.184810  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.184818  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:45.184827  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:45.184840  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:45.266987  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:45.267020  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:45.286876  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:45.286913  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:45.370968  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:45.363581   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.364305   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.365832   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.366292   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.367509   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
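	Every kubectl call in this loop fails the same way: nothing is listening on localhost:8441, so the client is refused before it can even authenticate. A minimal way to confirm that from inside the node, assuming the standard ss and curl utilities are present (the port number is taken from the errors above):

	  # is any process listening on the apiserver port?
	  sudo ss -tlnp | grep 8441
	  # probe the health endpoint directly; this fails immediately while the apiserver is down
	  curl -ks https://localhost:8441/healthz || echo "apiserver unreachable"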
	I1205 06:49:45.370979  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:45.370991  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:45.446768  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:45.446788  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
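	Each polling cycle above issues the same CRI query once per control-plane component. A condensed sketch of the equivalent check, assuming crictl is installed on the node (the component list is copied from the log; the loop itself is illustrative, not minikube's implementation):

	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    # --quiet prints only container IDs; empty output means no matching container exists
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    [ -n "$ids" ] || echo "no container found matching $c"
	  done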
	I1205 06:49:47.979096  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:47.989170  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:47.989236  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:48.018828  485986 cri.go:89] found id: ""
	I1205 06:49:48.018841  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.018849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:48.018854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:48.018915  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:48.048874  485986 cri.go:89] found id: ""
	I1205 06:49:48.048888  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.048895  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:48.048901  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:48.048960  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:48.075707  485986 cri.go:89] found id: ""
	I1205 06:49:48.075722  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.075728  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:48.075733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:48.075792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:48.100630  485986 cri.go:89] found id: ""
	I1205 06:49:48.100644  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.100651  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:48.100657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:48.100715  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:48.126176  485986 cri.go:89] found id: ""
	I1205 06:49:48.126190  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.126197  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:48.126202  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:48.126266  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:48.153143  485986 cri.go:89] found id: ""
	I1205 06:49:48.153157  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.153170  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:48.153181  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:48.153249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:48.179066  485986 cri.go:89] found id: ""
	I1205 06:49:48.179080  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.179087  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:48.179094  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:48.179104  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:48.238867  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:48.231394   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.232041   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233115   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233702   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.235281   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:48.238878  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:48.238892  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:48.318473  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:48.318493  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:48.351978  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:48.352000  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:48.421167  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:48.421187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
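	The log-gathering half of each cycle pulls the last 400 lines of the kubelet and CRI-O journals plus filtered kernel messages. The same commands can be re-run verbatim on the node when triaging this failure:

	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400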
	I1205 06:49:50.939180  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:50.949233  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:50.949290  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:50.978828  485986 cri.go:89] found id: ""
	I1205 06:49:50.978842  485986 logs.go:282] 0 containers: []
	W1205 06:49:50.978849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:50.978854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:50.978910  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:51.004445  485986 cri.go:89] found id: ""
	I1205 06:49:51.004461  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.004469  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:51.004475  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:51.004545  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:51.032998  485986 cri.go:89] found id: ""
	I1205 06:49:51.033012  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.033019  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:51.033025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:51.033080  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:51.058907  485986 cri.go:89] found id: ""
	I1205 06:49:51.058921  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.058929  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:51.058934  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:51.058998  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:51.088751  485986 cri.go:89] found id: ""
	I1205 06:49:51.088765  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.088773  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:51.088778  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:51.088836  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:51.114739  485986 cri.go:89] found id: ""
	I1205 06:49:51.114753  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.114760  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:51.114766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:51.114827  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:51.146228  485986 cri.go:89] found id: ""
	I1205 06:49:51.146242  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.146249  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:51.146257  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:51.146267  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:51.213460  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:51.213479  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:51.228827  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:51.228842  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:51.295308  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:51.295318  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:51.295328  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:51.378866  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:51.378887  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
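	Each cycle opens with a process-level probe before the container queries. The pgrep flags used above mean: -x requires the pattern to match exactly, -n reports only the newest matching process, and -f matches against the full command line rather than just the process name. Run standalone it looks like this; a non-empty result would mean an apiserver process exists even though no CRI container does:

	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"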
	I1205 06:49:53.908370  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:53.918562  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:53.918621  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:53.944262  485986 cri.go:89] found id: ""
	I1205 06:49:53.944277  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.944284  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:53.944289  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:53.944349  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:53.969495  485986 cri.go:89] found id: ""
	I1205 06:49:53.969509  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.969516  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:53.969522  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:53.969602  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:53.996074  485986 cri.go:89] found id: ""
	I1205 06:49:53.996088  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.996095  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:53.996100  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:53.996155  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:54.023768  485986 cri.go:89] found id: ""
	I1205 06:49:54.023783  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.023790  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:54.023796  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:54.023854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:54.048370  485986 cri.go:89] found id: ""
	I1205 06:49:54.048385  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.048392  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:54.048397  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:54.048458  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:54.073241  485986 cri.go:89] found id: ""
	I1205 06:49:54.073255  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.073263  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:54.073268  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:54.073329  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:54.098794  485986 cri.go:89] found id: ""
	I1205 06:49:54.098808  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.098816  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:54.098824  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:54.098833  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:54.165835  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:54.165854  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:54.181432  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:54.181447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:54.255506  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:54.255516  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:54.255529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:54.341643  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:54.341666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
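	The "container status" step is deliberately runtime-agnostic: command substitution picks the full crictl path when `which` resolves it, otherwise the bare name is tried, and if the CRI call fails entirely the command falls back to Docker. Copied from the cycle above:

	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a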
	I1205 06:49:56.871077  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:56.883786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:56.883848  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:56.913242  485986 cri.go:89] found id: ""
	I1205 06:49:56.913255  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.913262  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:56.913268  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:56.913325  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:56.940834  485986 cri.go:89] found id: ""
	I1205 06:49:56.940849  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.940856  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:56.940863  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:56.940923  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:56.969612  485986 cri.go:89] found id: ""
	I1205 06:49:56.969626  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.969633  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:56.969639  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:56.969698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:56.996324  485986 cri.go:89] found id: ""
	I1205 06:49:56.996338  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.996345  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:56.996351  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:56.996412  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:57.023385  485986 cri.go:89] found id: ""
	I1205 06:49:57.023399  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.023407  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:57.023412  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:57.023470  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:57.047721  485986 cri.go:89] found id: ""
	I1205 06:49:57.047734  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.047741  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:57.047747  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:57.047803  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:57.072770  485986 cri.go:89] found id: ""
	I1205 06:49:57.072783  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.072790  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:57.072798  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:57.072807  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:57.137878  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:57.137898  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:57.153088  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:57.153110  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:57.215030  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:57.215041  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:57.215057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:57.298537  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:57.298556  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
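	The failing "describe nodes" command can be replayed by hand to check whether the apiserver has come back in the meantime; the kubeconfig path and the pinned kubectl binary are exactly as logged above:

	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	If it still reports connection refused, checking the kubelet unit (sudo systemctl status kubelet) is usually the next step, since the kubelet is what would start the apiserver static pod.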
	I1205 06:49:59.836134  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:59.846404  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:59.846463  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:59.871308  485986 cri.go:89] found id: ""
	I1205 06:49:59.871322  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.871329  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:59.871333  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:59.871389  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:59.897753  485986 cri.go:89] found id: ""
	I1205 06:49:59.897767  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.897774  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:59.897779  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:59.897840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:59.922634  485986 cri.go:89] found id: ""
	I1205 06:49:59.922649  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.922655  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:59.922661  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:59.922721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:59.946450  485986 cri.go:89] found id: ""
	I1205 06:49:59.946463  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.946473  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:59.946478  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:59.946535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:59.972723  485986 cri.go:89] found id: ""
	I1205 06:49:59.972738  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.972745  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:59.972750  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:59.972809  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:00.021990  485986 cri.go:89] found id: ""
	I1205 06:50:00.022006  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.022014  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:00.022020  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:00.022097  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:00.144138  485986 cri.go:89] found id: ""
	I1205 06:50:00.144154  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.144162  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:00.144171  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:00.144184  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:00.257253  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:00.257284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:00.303408  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:00.303429  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:00.439913  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:00.439925  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:00.439937  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:00.532383  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:00.532408  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:03.067932  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:03.078353  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:03.078441  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:03.108943  485986 cri.go:89] found id: ""
	I1205 06:50:03.108957  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.108964  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:03.108969  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:03.109032  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:03.139046  485986 cri.go:89] found id: ""
	I1205 06:50:03.139060  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.139077  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:03.139082  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:03.139150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:03.166455  485986 cri.go:89] found id: ""
	I1205 06:50:03.166470  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.166479  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:03.166485  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:03.166587  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:03.195955  485986 cri.go:89] found id: ""
	I1205 06:50:03.195969  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.195976  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:03.195981  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:03.196037  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:03.221513  485986 cri.go:89] found id: ""
	I1205 06:50:03.221527  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.221539  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:03.221545  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:03.221616  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:03.250570  485986 cri.go:89] found id: ""
	I1205 06:50:03.250583  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.250589  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:03.250595  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:03.250649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:03.278449  485986 cri.go:89] found id: ""
	I1205 06:50:03.278463  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.278470  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:03.278477  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:03.278488  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:03.355784  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:03.355803  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:03.375344  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:03.375365  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:03.438665  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:03.438679  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:03.438690  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:03.518012  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:03.518040  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:06.053429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:06.064448  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:06.064511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:06.091072  485986 cri.go:89] found id: ""
	I1205 06:50:06.091087  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.091094  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:06.091100  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:06.091166  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:06.119823  485986 cri.go:89] found id: ""
	I1205 06:50:06.119837  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.119844  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:06.119849  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:06.119905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:06.148798  485986 cri.go:89] found id: ""
	I1205 06:50:06.148812  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.148819  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:06.148824  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:06.148880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:06.179319  485986 cri.go:89] found id: ""
	I1205 06:50:06.179334  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.179341  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:06.179346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:06.179402  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:06.204637  485986 cri.go:89] found id: ""
	I1205 06:50:06.204652  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.204659  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:06.204665  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:06.204727  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:06.232891  485986 cri.go:89] found id: ""
	I1205 06:50:06.232906  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.232913  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:06.232919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:06.232977  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:06.260874  485986 cri.go:89] found id: ""
	I1205 06:50:06.260888  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.260895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:06.260904  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:06.260914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:06.331930  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:06.331950  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:06.349062  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:06.349078  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:06.413245  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:06.404839   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.405471   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407216   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407836   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.409486   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:06.413254  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:06.413265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:06.491562  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:06.491584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:09.021435  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:09.031990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:09.032051  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:09.057732  485986 cri.go:89] found id: ""
	I1205 06:50:09.057746  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.057753  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:09.057758  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:09.057814  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:09.085296  485986 cri.go:89] found id: ""
	I1205 06:50:09.085309  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.085316  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:09.085321  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:09.085377  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:09.113133  485986 cri.go:89] found id: ""
	I1205 06:50:09.113147  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.113154  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:09.113159  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:09.113221  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:09.139103  485986 cri.go:89] found id: ""
	I1205 06:50:09.139117  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.139125  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:09.139130  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:09.139196  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:09.171980  485986 cri.go:89] found id: ""
	I1205 06:50:09.171995  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.172005  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:09.172011  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:09.172066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:09.197034  485986 cri.go:89] found id: ""
	I1205 06:50:09.197048  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.197055  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:09.197059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:09.197122  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:09.222626  485986 cri.go:89] found id: ""
	I1205 06:50:09.222641  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.222649  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:09.222656  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:09.222667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:09.288268  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:09.288287  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:09.304011  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:09.304027  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:09.378142  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:09.369828   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.370439   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372261   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372817   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.374506   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:09.378151  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:09.378162  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:09.455057  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:09.455077  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
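[Note] The block above, repeated roughly every three seconds below, is minikube's control-plane health poll: it looks for a kube-apiserver process, lists CRI containers for each expected component, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal hand-run sketch of the same checks, using only the commands this log shows (assumes crictl is on the node):

    # poll for the apiserver process, then for each component's container
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"   # empty output = no container found
    done
    # the same log slices minikube gathers on each failed poll
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a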
	I1205 06:50:11.984604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:11.994696  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:11.994758  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:12.021691  485986 cri.go:89] found id: ""
	I1205 06:50:12.021706  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.021713  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:12.021718  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:12.021777  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:12.049086  485986 cri.go:89] found id: ""
	I1205 06:50:12.049099  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.049106  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:12.049111  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:12.049170  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:12.077335  485986 cri.go:89] found id: ""
	I1205 06:50:12.077348  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.077355  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:12.077360  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:12.077419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:12.104976  485986 cri.go:89] found id: ""
	I1205 06:50:12.104990  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.104998  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:12.105003  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:12.105065  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:12.130275  485986 cri.go:89] found id: ""
	I1205 06:50:12.130289  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.130297  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:12.130303  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:12.130359  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:12.156777  485986 cri.go:89] found id: ""
	I1205 06:50:12.156791  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.156798  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:12.156804  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:12.156862  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:12.184468  485986 cri.go:89] found id: ""
	I1205 06:50:12.184482  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.184489  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:12.184496  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:12.184506  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:12.250190  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:12.250212  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:12.265279  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:12.265295  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:12.350637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:12.342053   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.342918   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.344705   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.345237   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.346914   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:12.350648  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:12.350659  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:12.429523  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:12.429548  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:14.958454  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:14.970034  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:14.970110  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:14.996731  485986 cri.go:89] found id: ""
	I1205 06:50:14.996754  485986 logs.go:282] 0 containers: []
	W1205 06:50:14.996761  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:14.996767  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:14.996833  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:15.032417  485986 cri.go:89] found id: ""
	I1205 06:50:15.032440  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.032448  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:15.032454  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:15.032524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:15.060989  485986 cri.go:89] found id: ""
	I1205 06:50:15.061008  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.061016  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:15.061022  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:15.061083  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:15.088194  485986 cri.go:89] found id: ""
	I1205 06:50:15.088208  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.088215  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:15.088221  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:15.088280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:15.115923  485986 cri.go:89] found id: ""
	I1205 06:50:15.115938  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.115945  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:15.115951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:15.116010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:15.146014  485986 cri.go:89] found id: ""
	I1205 06:50:15.146028  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.146035  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:15.146041  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:15.146150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:15.173160  485986 cri.go:89] found id: ""
	I1205 06:50:15.173175  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.173191  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:15.173199  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:15.173208  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:15.245690  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:15.237281   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.237912   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.239571   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.240233   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.241922   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:15.245700  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:15.245710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:15.325395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:15.325417  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:15.356222  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:15.356276  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:15.428176  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:15.428198  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:17.943733  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:17.954302  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:17.954363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:17.979858  485986 cri.go:89] found id: ""
	I1205 06:50:17.979872  485986 logs.go:282] 0 containers: []
	W1205 06:50:17.979879  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:17.979884  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:17.979948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:18.013482  485986 cri.go:89] found id: ""
	I1205 06:50:18.013497  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.013504  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:18.013509  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:18.013593  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:18.040079  485986 cri.go:89] found id: ""
	I1205 06:50:18.040094  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.040102  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:18.040108  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:18.040172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:18.066285  485986 cri.go:89] found id: ""
	I1205 06:50:18.066300  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.066308  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:18.066312  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:18.066369  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:18.091446  485986 cri.go:89] found id: ""
	I1205 06:50:18.091461  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.091468  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:18.091473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:18.091532  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:18.121218  485986 cri.go:89] found id: ""
	I1205 06:50:18.121234  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.121241  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:18.121247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:18.121306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:18.147004  485986 cri.go:89] found id: ""
	I1205 06:50:18.147018  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.147032  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:18.147039  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:18.147050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:18.212973  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:18.205230   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.206055   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207680   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207996   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.209502   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:18.212983  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:18.212993  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:18.290491  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:18.290510  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:18.319970  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:18.319986  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:18.392419  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:18.392440  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:20.907875  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:20.918552  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:20.918615  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:20.948914  485986 cri.go:89] found id: ""
	I1205 06:50:20.948928  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.948935  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:20.948941  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:20.948999  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:20.974289  485986 cri.go:89] found id: ""
	I1205 06:50:20.974303  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.974310  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:20.974315  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:20.974371  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:20.999954  485986 cri.go:89] found id: ""
	I1205 06:50:20.999968  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.999976  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:20.999980  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:21.000038  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:21.029788  485986 cri.go:89] found id: ""
	I1205 06:50:21.029803  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.029810  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:21.029815  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:21.029875  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:21.055163  485986 cri.go:89] found id: ""
	I1205 06:50:21.055177  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.055183  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:21.055188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:21.055246  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:21.080955  485986 cri.go:89] found id: ""
	I1205 06:50:21.080969  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.080977  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:21.080982  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:21.081052  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:21.108615  485986 cri.go:89] found id: ""
	I1205 06:50:21.108629  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.108637  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:21.108644  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:21.108655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:21.173790  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:21.173811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:21.188952  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:21.188969  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:21.253459  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:21.245103   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.245717   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.247496   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.248173   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.249826   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:21.253469  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:21.253480  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:21.337063  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:21.337084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:23.866768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:23.877363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:23.877430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:23.903790  485986 cri.go:89] found id: ""
	I1205 06:50:23.903807  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.903814  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:23.903819  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:23.903880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:23.933319  485986 cri.go:89] found id: ""
	I1205 06:50:23.933333  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.933341  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:23.933346  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:23.933403  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:23.959901  485986 cri.go:89] found id: ""
	I1205 06:50:23.959914  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.959922  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:23.959927  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:23.959987  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:23.986070  485986 cri.go:89] found id: ""
	I1205 06:50:23.986083  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.986090  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:23.986096  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:23.986154  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:24.014309  485986 cri.go:89] found id: ""
	I1205 06:50:24.014324  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.014331  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:24.014336  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:24.014422  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:24.040569  485986 cri.go:89] found id: ""
	I1205 06:50:24.040590  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.040598  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:24.040603  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:24.040663  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:24.066648  485986 cri.go:89] found id: ""
	I1205 06:50:24.066661  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.066669  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:24.066676  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:24.066687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:24.145239  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:24.145259  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:24.173133  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:24.173149  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:24.238469  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:24.238489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:24.253802  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:24.253821  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:24.341051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:24.329593   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.330313   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332016   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332556   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.337208   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:26.841329  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:26.852711  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:26.852792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:26.878845  485986 cri.go:89] found id: ""
	I1205 06:50:26.878858  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.878865  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:26.878871  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:26.878926  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:26.903460  485986 cri.go:89] found id: ""
	I1205 06:50:26.903475  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.903482  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:26.903487  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:26.903543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:26.928316  485986 cri.go:89] found id: ""
	I1205 06:50:26.928330  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.928337  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:26.928342  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:26.928401  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:26.957464  485986 cri.go:89] found id: ""
	I1205 06:50:26.957477  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.957484  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:26.957490  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:26.957547  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:26.985494  485986 cri.go:89] found id: ""
	I1205 06:50:26.985508  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.985515  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:26.985520  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:26.985588  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:27.012077  485986 cri.go:89] found id: ""
	I1205 06:50:27.012092  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.012099  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:27.012105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:27.012164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:27.037759  485986 cri.go:89] found id: ""
	I1205 06:50:27.037772  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.037779  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:27.037802  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:27.037813  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:27.068005  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:27.068022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:27.132023  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:27.132042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:27.147964  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:27.147981  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:27.210077  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:27.201653   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.202464   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204190   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204761   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.206360   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:27.210087  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:27.210098  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:29.784398  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:29.794460  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:29.794523  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:29.820207  485986 cri.go:89] found id: ""
	I1205 06:50:29.820221  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.820228  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:29.820235  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:29.820301  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:29.845407  485986 cri.go:89] found id: ""
	I1205 06:50:29.845421  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.845429  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:29.845434  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:29.845494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:29.871350  485986 cri.go:89] found id: ""
	I1205 06:50:29.871364  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.871371  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:29.871376  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:29.871434  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:29.896668  485986 cri.go:89] found id: ""
	I1205 06:50:29.896682  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.896689  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:29.896694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:29.896753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:29.925230  485986 cri.go:89] found id: ""
	I1205 06:50:29.925243  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.925250  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:29.925256  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:29.925320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:29.950431  485986 cri.go:89] found id: ""
	I1205 06:50:29.950445  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.950453  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:29.950459  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:29.950516  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:29.975493  485986 cri.go:89] found id: ""
	I1205 06:50:29.975507  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.975514  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:29.975522  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:29.975532  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:29.990544  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:29.990561  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:30.089331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:30.079547   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.080925   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.082899   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.083556   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.085423   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:30.089343  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:30.089355  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:30.176998  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:30.177019  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:30.207325  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:30.207342  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:32.779616  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:32.789524  485986 kubeadm.go:602] duration metric: took 4m3.78523296s to restartPrimaryControlPlane
	W1205 06:50:32.789596  485986 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:50:32.789791  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:50:33.200382  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:33.213168  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
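[Note] After roughly four minutes of failed polling, minikube gives up on restarting the existing control plane and falls back to a full reset followed by a fresh kubeadm init. The reset sequence above as a hand-run sketch (binary path and CRI-O socket taken from this log; the systemctl check is the standard form of the one minikube runs):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo systemctl is-active --quiet kubelet && echo "kubelet still active"
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml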
	I1205 06:50:33.221236  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:50:33.221295  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:50:33.229165  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:50:33.229174  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:50:33.229226  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:50:33.236961  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:50:33.237026  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:50:33.244309  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:50:33.252201  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:50:33.252257  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:50:33.259677  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.267359  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:50:33.267427  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.275464  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:50:33.283208  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:50:33.283271  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
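[Note] The four grep/rm pairs above are minikube's stale-config cleanup: if a kubeconfig does not reference the expected API endpoint, the file is removed so kubeadm can regenerate it. Since none of the files exist here, every grep exits with status 2 and the rm calls are no-ops. The same loop by hand (endpoint and paths as in this log):

    ep="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done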
	I1205 06:50:33.290746  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:50:33.405156  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:50:33.405615  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:50:33.478173  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:54:34.582933  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:54:34.582957  485986 kubeadm.go:319] 
	I1205 06:54:34.583076  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
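[Note] The failure above is kubeadm timing out on the kubelet's health endpoint: the replayed output below shows every phase up to [kubelet-start] completing, but http://127.0.0.1:10248/healthz never answered within 4m0s, which points at the kubelet itself rather than certificates or manifests. The same probe can be repeated by hand on the node, using the commands this log already references:

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers: ok
    sudo systemctl status kubelet --no-pager   # is the service running at all?
    sudo journalctl -u kubelet -n 400          # same slice minikube gathers above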
	I1205 06:54:34.588185  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:34.588247  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:34.588363  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:34.588446  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:34.588482  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:34.588527  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:34.588597  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:34.588649  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:34.588697  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:34.588744  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:34.588792  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:34.588836  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:34.588883  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:34.588934  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:34.589006  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:34.589099  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:34.589189  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:34.589249  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:34.592315  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:34.592403  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:34.592463  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:34.592535  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:34.592603  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:34.592668  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:34.592743  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:34.592810  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:34.592871  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:34.592953  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:34.593046  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:34.593088  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:34.593139  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:34.593190  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:34.593242  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:34.593294  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:34.593352  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:34.593406  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:34.593499  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:34.593561  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:34.596524  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:34.596625  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:34.596698  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:34.596789  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:34.596910  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:34.597004  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:34.597119  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:34.597212  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:34.597250  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:34.597382  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:34.597485  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:54:34.597547  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00128632s
	I1205 06:54:34.597550  485986 kubeadm.go:319] 
	I1205 06:54:34.597605  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:54:34.597636  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:54:34.597743  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:54:34.597746  485986 kubeadm.go:319] 
	I1205 06:54:34.597848  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:54:34.597879  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:54:34.597909  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 06:54:34.598022  485986 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128632s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 06:54:34.598117  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:54:34.598416  485986 kubeadm.go:319] 
	I1205 06:54:35.010606  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:54:35.026641  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:54:35.026696  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:54:35.034906  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:54:35.034914  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:54:35.034968  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:54:35.043100  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:54:35.043156  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:54:35.050682  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:54:35.058435  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:54:35.058491  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:54:35.066352  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.075006  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:54:35.075083  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.083161  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:54:35.091527  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:54:35.091591  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:54:35.099509  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:54:35.143144  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:35.143194  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:35.214737  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:35.214806  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:35.214841  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:35.214894  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:35.214941  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:35.214988  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:35.215036  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:35.215082  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:35.215135  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:35.215179  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:35.215227  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:35.215272  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:35.280867  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:35.280975  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:35.281065  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:35.290789  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:35.294356  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:35.294469  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:35.294532  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:35.294608  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:35.294667  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:35.294735  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:35.294788  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:35.294850  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:35.294910  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:35.294989  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:35.295060  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:35.295097  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:35.295152  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:35.600230  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:35.819372  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:36.031672  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:36.347784  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:36.515743  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:36.516403  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:36.519035  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:36.522469  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:36.522648  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:36.522737  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:36.522811  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:36.538750  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:36.538854  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:36.547809  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:36.548944  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:36.549484  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:36.685042  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:36.685156  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:58:36.684952  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233025s
	I1205 06:58:36.684982  485986 kubeadm.go:319] 
	I1205 06:58:36.685040  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:58:36.685073  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:58:36.685203  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:58:36.685213  485986 kubeadm.go:319] 
	I1205 06:58:36.685319  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:58:36.685352  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:58:36.685382  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:58:36.685386  485986 kubeadm.go:319] 
	I1205 06:58:36.690024  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:58:36.690504  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:58:36.690648  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:58:36.690898  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:58:36.690904  485986 kubeadm.go:319] 
	I1205 06:58:36.690971  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:58:36.691025  485986 kubeadm.go:403] duration metric: took 12m7.722207493s to StartCluster
	I1205 06:58:36.691058  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:58:36.691120  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:58:36.717503  485986 cri.go:89] found id: ""
	I1205 06:58:36.717522  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.717530  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:58:36.717535  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:58:36.717599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:58:36.742068  485986 cri.go:89] found id: ""
	I1205 06:58:36.742083  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.742090  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:58:36.742095  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:58:36.742150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:58:36.766426  485986 cri.go:89] found id: ""
	I1205 06:58:36.766439  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.766446  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:58:36.766452  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:58:36.766507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:58:36.791681  485986 cri.go:89] found id: ""
	I1205 06:58:36.791696  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.791703  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:58:36.791707  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:58:36.791767  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:58:36.816243  485986 cri.go:89] found id: ""
	I1205 06:58:36.816257  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.816264  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:58:36.816269  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:58:36.816323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:58:36.841386  485986 cri.go:89] found id: ""
	I1205 06:58:36.841399  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.841406  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:58:36.841411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:58:36.841467  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:58:36.866554  485986 cri.go:89] found id: ""
	I1205 06:58:36.866568  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.866575  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:58:36.866584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:58:36.866594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:58:36.900565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:58:36.900582  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:58:36.968215  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:58:36.968234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:58:36.983291  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:58:36.983307  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:58:37.054622  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:58:37.054632  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:58:37.054644  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1205 06:58:37.134983  485986 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:58:37.135045  485986 out.go:285] * 
	W1205 06:58:37.135160  485986 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:58:37.135223  485986 out.go:285] * 
	W1205 06:58:37.137432  485986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:58:37.142766  485986 out.go:203] 
	W1205 06:58:37.146311  485986 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:58:37.146363  485986 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:58:37.146497  485986 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:58:37.150301  485986 out.go:203] 
	
	
	==> CRI-O <==
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571372953Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571407981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571472524Z" level=info msg="Create NRI interface"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571571528Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571581186Z" level=info msg="runtime interface created"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571594224Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571600657Z" level=info msg="runtime interface starting up..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571606548Z" level=info msg="starting plugins..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571619709Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571689602Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:46:27 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.481366601Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1e822775-5cef-40d3-9686-eee6d086f1b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482224852Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e1ef1844-3877-40e0-84c2-d1c873b40d24 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482740149Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=56b0a0d4-9f66-4348-9e04-1e53dd2684db name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483228025Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=beb5cc41-ecba-44e2-8431-8eb7caf9e6f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483764967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d6fbbe20-116f-42f6-8365-a643bfd6a022 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484325426Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cc465c27-997c-4720-add0-d2aaefef1742 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484777542Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5847487f-12af-4b83-83de-0b1cf4bc7dd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.284218578Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9fcc6ad9-fc72-42e2-9eb3-af609b8c0fda name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285002572Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0a9c3300-2647-489a-a8c7-299acd2c2ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285494328Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ef012813-7294-42de-84e3-c56b0aecceed name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285987553Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=97a76923-ddd0-413b-afdb-1a86b6e1781b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.286464253Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=72332317-e652-4a97-9d17-3ba7818fe38f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.28695984Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=443ec697-27e1-4420-9454-8afdb0ee65b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.287383469Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7f6d73c6-60bf-4743-9f1c-60ae6c282918 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:40.578959   22001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:40.579690   22001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:40.580472   22001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:40.581299   22001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:40.582960   22001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:58:40 up  3:40,  0 user,  load average: 0.26, 0.24, 0.36
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 06:58:37 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:38 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 05 06:58:38 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:38 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:38 functional-787602 kubelet[21876]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:38 functional-787602 kubelet[21876]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:38 functional-787602 kubelet[21876]: E1205 06:58:38.583183   21876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:38 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:38 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:39 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Dec 05 06:58:39 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:39 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:39 functional-787602 kubelet[21895]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:39 functional-787602 kubelet[21895]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:39 functional-787602 kubelet[21895]: E1205 06:58:39.271017   21895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:39 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:39 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 06:58:40 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 644.
	Dec 05 06:58:40 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:40 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 06:58:40 functional-787602 kubelet[21918]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:40 functional-787602 kubelet[21918]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 06:58:40 functional-787602 kubelet[21918]: E1205 06:58:40.105626   21918 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 06:58:40 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 06:58:40 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (388.310602ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.25s)
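Note: the kubelet journal above contains the root cause of this failure. The v1.35.0-beta.0 kubelet fails its own configuration validation because it is configured not to run on a cgroup v1 host, so systemd restart-loops it (restart counter 643 and climbing) and the apiserver never comes back, which is why ComponentHealth sees state="Stopped". A minimal diagnostic sketch, outside the test harness, assuming the functional-787602 container from the logs is still running:

	# Print the filesystem type of the cgroup mount inside the node:
	# "cgroup2fs" means cgroup v2 (unified); "tmpfs" means cgroup v1 (legacy/hybrid).
	docker exec functional-787602 stat -fc %T /sys/fs/cgroup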

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-787602 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-787602 apply -f testdata/invalidsvc.yaml: exit status 1 (54.947798ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-787602 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
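Note: kubectl's validation step is the first thing to touch the apiserver, so the same dead apiserver fails this test; the problem is not in testdata/invalidsvc.yaml itself, the OpenAPI schema download is simply refused at 192.168.49.2:8441. A quick reachability probe, hedged on the endpoint taken from the error above:

	# A healthy apiserver answers over TLS; "connection refused" reproduces the failure.
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"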

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787602 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787602 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787602 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787602 --alsologtostderr -v=1] stderr:
I1205 07:00:49.930556  503058 out.go:360] Setting OutFile to fd 1 ...
I1205 07:00:49.930693  503058 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:00:49.930704  503058 out.go:374] Setting ErrFile to fd 2...
I1205 07:00:49.930709  503058 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:00:49.931071  503058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:00:49.931392  503058 mustload.go:66] Loading cluster: functional-787602
I1205 07:00:49.932089  503058 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:00:49.933157  503058 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:00:49.949719  503058 host.go:66] Checking if "functional-787602" exists ...
I1205 07:00:49.950062  503058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 07:00:50.014469  503058 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.997103696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 07:00:50.014611  503058 api_server.go:166] Checking apiserver status ...
I1205 07:00:50.014692  503058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 07:00:50.014738  503058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:00:50.048525  503058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
W1205 07:00:50.156090  503058 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1205 07:00:50.159324  503058 out.go:179] * The control-plane node functional-787602 apiserver is not running: (state=Stopped)
I1205 07:00:50.162194  503058 out.go:179]   To start a cluster, run: "minikube start -p functional-787602"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (314.440426ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-787602 service hello-node --url --format={{.IP}}                                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ service   │ functional-787602 service hello-node --url                                                                                                         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001:/mount-9p --alsologtostderr -v=1             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh -- ls -la /mount-9p                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh cat /mount-9p/test-1764918040672352845                                                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh sudo umount -f /mount-9p                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo535009720/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh -- ls -la /mount-9p                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh sudo umount -f /mount-9p                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount1 --alsologtostderr -v=1               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount1                                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount3 --alsologtostderr -v=1               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount2 --alsologtostderr -v=1               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount2                                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh findmnt -T /mount3                                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount     │ -p functional-787602 --kill=true                                                                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ start     │ -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ start     │ -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ start     │ -p functional-787602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-787602 --alsologtostderr -v=1                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:00:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:00:49.699416  502985 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:49.699614  502985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.699641  502985 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:49.699659  502985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.699940  502985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:00:49.700305  502985 out.go:368] Setting JSON to false
	I1205 07:00:49.701230  502985 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13377,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:00:49.701328  502985 start.go:143] virtualization:  
	I1205 07:00:49.704521  502985 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:00:49.707554  502985 notify.go:221] Checking for updates...
	I1205 07:00:49.708410  502985 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:00:49.711339  502985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:00:49.714116  502985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:00:49.717016  502985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:00:49.719832  502985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:00:49.722711  502985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:00:49.726030  502985 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:49.726712  502985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:00:49.759422  502985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:00:49.759531  502985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.816255  502985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.807258963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.816364  502985 docker.go:319] overlay module found
	I1205 07:00:49.819519  502985 out.go:179] * Using the docker driver based on existing profile
	I1205 07:00:49.822352  502985 start.go:309] selected driver: docker
	I1205 07:00:49.822400  502985 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.822507  502985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:00:49.822624  502985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.878784  502985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.869881278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.879217  502985 cni.go:84] Creating CNI manager for ""
	I1205 07:00:49.879293  502985 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:00:49.879338  502985 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.882317  502985 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571372953Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571407981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571472524Z" level=info msg="Create NRI interface"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571571528Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571581186Z" level=info msg="runtime interface created"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571594224Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571600657Z" level=info msg="runtime interface starting up..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571606548Z" level=info msg="starting plugins..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571619709Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571689602Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:46:27 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.481366601Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1e822775-5cef-40d3-9686-eee6d086f1b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482224852Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e1ef1844-3877-40e0-84c2-d1c873b40d24 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482740149Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=56b0a0d4-9f66-4348-9e04-1e53dd2684db name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483228025Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=beb5cc41-ecba-44e2-8431-8eb7caf9e6f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483764967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d6fbbe20-116f-42f6-8365-a643bfd6a022 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484325426Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cc465c27-997c-4720-add0-d2aaefef1742 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484777542Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5847487f-12af-4b83-83de-0b1cf4bc7dd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.284218578Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9fcc6ad9-fc72-42e2-9eb3-af609b8c0fda name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285002572Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0a9c3300-2647-489a-a8c7-299acd2c2ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285494328Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ef012813-7294-42de-84e3-c56b0aecceed name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285987553Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=97a76923-ddd0-413b-afdb-1a86b6e1781b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.286464253Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=72332317-e652-4a97-9d17-3ba7818fe38f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.28695984Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=443ec697-27e1-4420-9454-8afdb0ee65b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.287383469Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7f6d73c6-60bf-4743-9f1c-60ae6c282918 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:00:51.199989   24057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:51.200475   24057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:51.202181   24057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:51.202529   24057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:51.204000   24057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:00:51 up  3:42,  0 user,  load average: 0.84, 0.37, 0.38
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:00:48 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:49 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 816.
	Dec 05 07:00:49 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:49 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:49 functional-787602 kubelet[23938]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:49 functional-787602 kubelet[23938]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:49 functional-787602 kubelet[23938]: E1205 07:00:49.335329   23938 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:49 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:49 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:50 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 817.
	Dec 05 07:00:50 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:50 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:50 functional-787602 kubelet[23944]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:50 functional-787602 kubelet[23944]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:50 functional-787602 kubelet[23944]: E1205 07:00:50.083748   23944 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:50 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:50 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:50 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 05 07:00:50 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:50 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:50 functional-787602 kubelet[23974]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:50 functional-787602 kubelet[23974]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:50 functional-787602 kubelet[23974]: E1205 07:00:50.830913   23974 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:50 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:50 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (343.838971ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)
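Note: "output didn't produce a URL" is a consequence, not the fault. The stderr above shows the dashboard command pre-checking the apiserver by running sudo pgrep -xnf kube-apiserver.*minikube.* over SSH; pgrep exits 1 (no matching process), so the command prints the "apiserver is not running" hint and stops before ever starting a dashboard proxy. A sketch of the same check by hand, assuming the container is still up:

	# Exit status 1 here means no kube-apiserver process exists on the node.
	docker exec functional-787602 sudo pgrep -xnf 'kube-apiserver.*minikube.*'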

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 status: exit status 2 (303.054459ms)

-- stdout --
	functional-787602
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-787602 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (315.985217ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-787602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 status -o json: exit status 2 (353.204327ms)

-- stdout --
	{"Name":"functional-787602","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-787602 status -o json" : exit status 2
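Note the flapping between the three status samples above: the plain and custom-format runs report kubelet Stopped, while the JSON run a moment later reports "Kubelet":"Running". That matches the kubelet journal, where systemd restarts the failing unit roughly once a second (restart counter 816-818), so whether a sample catches it "running" depends on timing; the apiserver is Stopped in every sample, hence exit status 2 each time. For completeness, a hedged one-liner for pulling a single field out of the JSON form (jq is not part of the harness and is used here only for illustration):

	# Prints "Stopped" for this cluster state.
	out/minikube-linux-arm64 -p functional-787602 status -o json | jq -r '.APIServer'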
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
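
Aside: rather than scanning the full inspect dump above, single fields can be pulled with an inspect format template; the same technique appears later in the minikube log (`docker container inspect -f ...`). A sketch, assuming the container name from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Host port bound to the container's 22/tcp (SSH) port.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-787602").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // "33148" per the Ports map above
	}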
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (316.000659ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-787602 addons list -o json                                                                                                              │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ service │ functional-787602 service list                                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ service │ functional-787602 service list -o json                                                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ service │ functional-787602 service --namespace=default --https --url hello-node                                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ service │ functional-787602 service hello-node --url --format={{.IP}}                                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ service │ functional-787602 service hello-node --url                                                                                                         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount   │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001:/mount-9p --alsologtostderr -v=1             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh     │ functional-787602 ssh -- ls -la /mount-9p                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh     │ functional-787602 ssh cat /mount-9p/test-1764918040672352845                                                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh     │ functional-787602 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh sudo umount -f /mount-9p                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount   │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo535009720/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh     │ functional-787602 ssh -- ls -la /mount-9p                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh     │ functional-787602 ssh sudo umount -f /mount-9p                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount   │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount1 --alsologtostderr -v=1               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh findmnt -T /mount1                                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount   │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount3 --alsologtostderr -v=1               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount   │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount2 --alsologtostderr -v=1               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh     │ functional-787602 ssh findmnt -T /mount2                                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh     │ functional-787602 ssh findmnt -T /mount3                                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount   │ -p functional-787602 --kill=true                                                                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:46:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:46:23.060483  485986 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:46:23.060587  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060592  485986 out.go:374] Setting ErrFile to fd 2...
	I1205 06:46:23.060596  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060943  485986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:46:23.061383  485986 out.go:368] Setting JSON to false
	I1205 06:46:23.062251  485986 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12510,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:46:23.062334  485986 start.go:143] virtualization:  
	I1205 06:46:23.066082  485986 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:46:23.069981  485986 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:46:23.070104  485986 notify.go:221] Checking for updates...
	I1205 06:46:23.076003  485986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:46:23.078837  485986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:46:23.081722  485986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:46:23.084680  485986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:46:23.087568  485986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:46:23.090922  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:23.091022  485986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:46:23.121487  485986 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:46:23.121590  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.189036  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.180099644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.189132  485986 docker.go:319] overlay module found
	I1205 06:46:23.192176  485986 out.go:179] * Using the docker driver based on existing profile
	I1205 06:46:23.195026  485986 start.go:309] selected driver: docker
	I1205 06:46:23.195034  485986 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.195143  485986 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:46:23.195245  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.259735  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.25087077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.260168  485986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:46:23.260193  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:23.260245  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:23.260292  485986 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.263405  485986 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:46:23.266278  485986 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:46:23.269305  485986 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:46:23.272128  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:23.272198  485986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:46:23.291679  485986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:46:23.291691  485986 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:46:23.331907  485986 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:46:24.681828  485986 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 06:46:24.681963  485986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:46:24.682057  485986 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682183  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:46:24.682191  485986 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.111µs
	I1205 06:46:24.682203  485986 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:46:24.682212  485986 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682238  485986 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:46:24.682242  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:46:24.682246  485986 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.143µs
	I1205 06:46:24.682251  485986 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682266  485986 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682260  485986 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682305  485986 start.go:364] duration metric: took 27.331µs to acquireMachinesLock for "functional-787602"
	I1205 06:46:24.682303  485986 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682317  485986 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:46:24.682322  485986 fix.go:54] fixHost starting: 
	I1205 06:46:24.682343  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:46:24.682348  485986 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.295µs
	I1205 06:46:24.682354  485986 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:46:24.682364  485986 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682416  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:46:24.682421  485986 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.691µs
	I1205 06:46:24.682428  485986 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682437  485986 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682462  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:46:24.682466  485986 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.31µs
	I1205 06:46:24.682471  485986 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682480  485986 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682514  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:46:24.682518  485986 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.27µs
	I1205 06:46:24.682523  485986 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:46:24.682531  485986 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682555  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:46:24.682558  485986 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.283µs
	I1205 06:46:24.682568  485986 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:46:24.682583  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:46:24.682587  485986 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 328.529µs
	I1205 06:46:24.682591  485986 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682599  485986 cache.go:87] Successfully saved all images to host disk.
	I1205 06:46:24.682614  485986 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:46:24.699421  485986 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:46:24.699440  485986 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:46:24.704636  485986 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:46:24.704669  485986 machine.go:94] provisionDockerMachine start ...
	I1205 06:46:24.704752  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.722297  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.722651  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.722658  485986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:46:24.869775  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:24.869801  485986 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:46:24.869864  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.887234  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.887558  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.887567  485986 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:46:25.047727  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:25.047810  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.066336  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.066675  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.066689  485986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:46:25.218719  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:46:25.218735  485986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:46:25.218754  485986 ubuntu.go:190] setting up certificates
	I1205 06:46:25.218762  485986 provision.go:84] configureAuth start
	I1205 06:46:25.218833  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:25.236317  485986 provision.go:143] copyHostCerts
	I1205 06:46:25.236383  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:46:25.236396  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:46:25.236468  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:46:25.236562  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:46:25.236565  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:46:25.236589  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:46:25.236636  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:46:25.236640  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:46:25.236661  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:46:25.236704  485986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
	I1205 06:46:25.509369  485986 provision.go:177] copyRemoteCerts
	I1205 06:46:25.509433  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:46:25.509483  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.526532  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:25.630074  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:46:25.647569  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:46:25.665563  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:46:25.683160  485986 provision.go:87] duration metric: took 464.374115ms to configureAuth
	I1205 06:46:25.683179  485986 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:46:25.683380  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:25.683487  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.701466  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.701775  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.701787  485986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:46:26.045147  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:46:26.045161  485986 machine.go:97] duration metric: took 1.340485738s to provisionDockerMachine
	I1205 06:46:26.045171  485986 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:46:26.045182  485986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:46:26.045240  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:46:26.045301  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.071462  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.178226  485986 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:46:26.181599  485986 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:46:26.181617  485986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:46:26.181627  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:46:26.181684  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:46:26.181759  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:46:26.181833  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:46:26.181875  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:46:26.189500  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:26.206597  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:46:26.223486  485986 start.go:296] duration metric: took 178.3022ms for postStartSetup
	I1205 06:46:26.223577  485986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:46:26.223614  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.239842  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.339498  485986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:46:26.344313  485986 fix.go:56] duration metric: took 1.66198384s for fixHost
	I1205 06:46:26.344329  485986 start.go:83] releasing machines lock for "functional-787602", held for 1.662017843s
	I1205 06:46:26.344396  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:26.361695  485986 ssh_runner.go:195] Run: cat /version.json
	I1205 06:46:26.361744  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.361773  485986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:46:26.361823  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.380556  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.389997  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.566296  485986 ssh_runner.go:195] Run: systemctl --version
	I1205 06:46:26.572676  485986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:46:26.609041  485986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:46:26.613450  485986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:46:26.613514  485986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:46:26.621451  485986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:46:26.621466  485986 start.go:496] detecting cgroup driver to use...
	I1205 06:46:26.621496  485986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:46:26.621543  485986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:46:26.637300  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:46:26.650753  485986 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:46:26.650821  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:46:26.666902  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:46:26.680209  485986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:46:26.795240  485986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:46:26.925661  485986 docker.go:234] disabling docker service ...
	I1205 06:46:26.925721  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:46:26.941529  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:46:26.954708  485986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:46:27.063545  485986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:46:27.175808  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:46:27.188517  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:46:27.203590  485986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:46:27.203644  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.212003  485986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:46:27.212066  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.220691  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.229907  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.238922  485986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:46:27.247339  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.256340  485986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.264720  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.273692  485986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:46:27.281324  485986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 06:46:27.288509  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.394627  485986 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:46:27.581943  485986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:46:27.582023  485986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:46:27.586836  485986 start.go:564] Will wait 60s for crictl version
	I1205 06:46:27.586892  485986 ssh_runner.go:195] Run: which crictl
	I1205 06:46:27.591027  485986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:46:27.618052  485986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 06:46:27.618154  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.654922  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.689535  485986 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:46:27.692450  485986 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:46:27.709456  485986 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:46:27.716890  485986 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:46:27.719774  485986 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:46:27.719904  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:27.719957  485986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:46:27.756745  485986 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:46:27.756757  485986 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:46:27.756762  485986 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:46:27.756860  485986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
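	The empty ExecStart= in the drop-in above is the standard systemd idiom for clearing the base unit's ExecStart before substituting the minikube-pinned kubelet invocation. To inspect the merged unit by hand (a manual check, not from this run):

	    systemctl cat kubelet                # base unit plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart  # the effective command line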
	I1205 06:46:27.756933  485986 ssh_runner.go:195] Run: crio config
	I1205 06:46:27.826615  485986 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:46:27.826635  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:27.826644  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:27.826657  485986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:46:27.826679  485986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:46:27.826795  485986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 06:46:27.826871  485986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:46:27.834649  485986 binaries.go:51] Found k8s binaries, skipping transfer
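	The kubeadm config rendered above is shipped to /var/tmp/minikube/kubeadm.yaml.new (the 2071-byte scp below). One way to sanity-check it offline, assuming the pinned binary at that path supports kubeadm config validate:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new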
	I1205 06:46:27.834712  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:46:27.842099  485986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:46:27.855421  485986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:46:27.868701  485986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1205 06:46:27.882058  485986 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:46:27.885936  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.995572  485986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:46:28.275034  485986 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:46:28.275045  485986 certs.go:195] generating shared ca certs ...
	I1205 06:46:28.275061  485986 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:46:28.275249  485986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:46:28.275292  485986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:46:28.275298  485986 certs.go:257] generating profile certs ...
	I1205 06:46:28.275410  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:46:28.275475  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:46:28.275515  485986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:46:28.275644  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:46:28.275677  485986 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:46:28.275685  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:46:28.275720  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:46:28.275747  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:46:28.275784  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:46:28.275832  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:28.276503  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:46:28.298544  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:46:28.319289  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:46:28.339576  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:46:28.358300  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:46:28.376540  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:46:28.394872  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:46:28.412281  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:46:28.429993  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:46:28.447492  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:46:28.464800  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:46:28.482269  485986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:46:28.494984  485986 ssh_runner.go:195] Run: openssl version
	I1205 06:46:28.501339  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.508762  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:46:28.516382  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520092  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520163  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.563665  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:46:28.571080  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.578338  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:46:28.585799  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589656  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589716  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.631223  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:46:28.638732  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.646106  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:46:28.653539  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657103  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657161  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.698123  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
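	The test/ln/openssl sequence above implements OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0. The same wiring by hand (a sketch mirroring the log):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    # ${h} is b5213941 in this run, matching the test -L above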
	I1205 06:46:28.706515  485986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:46:28.710605  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:46:28.754183  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:46:28.798105  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:46:28.841637  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:46:28.883652  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:46:28.926486  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
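	Each -checkend 86400 call asks openssl whether the certificate expires within the next 86400 seconds (24 h); exit status 0 means it remains valid past that window, so minikube skips regeneration. Equivalent one-liner:

	    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expiring soon; would regenerate"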
	I1205 06:46:28.968827  485986 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:28.968900  485986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:46:28.968973  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:28.995506  485986 cri.go:89] found id: ""
	I1205 06:46:28.995567  485986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:46:29.004262  485986 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:46:29.004281  485986 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:46:29.004345  485986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:46:29.012409  485986 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.012971  485986 kubeconfig.go:125] found "functional-787602" server: "https://192.168.49.2:8441"
	I1205 06:46:29.014556  485986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:46:29.022548  485986 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:31:50.409182079 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:46:27.876278809 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
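	Drift detection here is a plain diff of the on-disk kubeadm.yaml against the freshly rendered kubeadm.yaml.new; the single hunk above (this test's apiserver.enable-admission-plugins=NamespaceAutoProvision extra-config) is enough to trigger a control-plane reconfigure. A simplified equivalent of the check:

	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	      echo "kubeadm config drift detected; reconfiguring"
	    fi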
	I1205 06:46:29.022570  485986 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:46:29.022584  485986 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 06:46:29.022652  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:29.056958  485986 cri.go:89] found id: ""
	I1205 06:46:29.057019  485986 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:46:29.073934  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:46:29.081656  485986 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5623 Dec  5 06:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:35 /etc/kubernetes/scheduler.conf
	
	I1205 06:46:29.081722  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:46:29.089572  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:46:29.097486  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.097543  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:46:29.105088  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.112583  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.112639  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.120188  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:46:29.127909  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.127966  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:46:29.135508  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:46:29.143544  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:29.190973  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.485506  485986 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.294504309s)
	I1205 06:46:30.485577  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.689694  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.752398  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.798299  485986 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:46:30.798367  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.299303  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.799420  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:32.299360  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:32.798577  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:33.298564  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:33.799310  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.298783  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.799510  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:35.299369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:35.799119  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:36.298663  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:36.798517  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.299207  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.799156  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:38.298684  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:38.798475  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:39.299188  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:39.799197  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.299101  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.798572  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:41.298530  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:41.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:42.298546  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:42.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.298563  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.799313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:44.298528  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:44.799429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:45.299246  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:45.799313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.298849  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.799336  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:47.298524  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:47.798566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:48.298926  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:48.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.298502  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.799392  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:50.298514  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:50.799156  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:51.299002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:51.798510  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.298587  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.798531  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:53.298834  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:53.798937  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:54.298568  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:54.798738  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.298745  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.799302  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:56.298517  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:56.799058  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:57.299228  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:57.798518  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.298540  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.799439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:59.298489  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:59.798827  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:00.298721  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:00.799210  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.298539  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.798525  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:02.298844  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:02.799320  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:03.298437  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:03.799300  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.299120  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.799319  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:05.298499  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:05.799357  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:06.298718  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:06.799264  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.299497  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.799177  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:08.298596  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:08.798469  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:09.298441  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:09.798552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.299123  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.798514  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:11.299549  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:11.799361  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:12.298530  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:12.798490  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.299082  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.798506  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.298576  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.799316  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:15.298516  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:15.798581  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.298604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.799331  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:17.298518  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:17.799198  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:18.298513  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:18.799043  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.298601  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.798562  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:20.298562  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:20.798978  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:21.298537  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:21.798570  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.298807  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.799307  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:23.298910  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:23.798961  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:24.299359  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:24.799509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:25.299086  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:25.798511  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.298495  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.799378  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:27.298528  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:27.799258  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:28.298589  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:28.799234  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.299117  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.798575  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:30.299185  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
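	The ~500 ms pgrep cadence above is the wait-for-apiserver loop: kubeadm init phase control-plane only writes static pod manifests to /etc/kubernetes/manifests, so the kube-apiserver process appears only once the kubelet launches that pod. Here the pattern never matches within the 60 s window, and minikube starts interleaving log gathering with further retries below. A minimal equivalent of the loop (a sketch):

	    # poll for a running apiserver process, roughly what the loop above does
	    for _ in $(seq 1 120); do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	      sleep 0.5
	    done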
	I1205 06:47:30.799188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:30.799265  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:30.824550  485986 cri.go:89] found id: ""
	I1205 06:47:30.824564  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.824571  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:30.824577  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:30.824640  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:30.851389  485986 cri.go:89] found id: ""
	I1205 06:47:30.851404  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.851412  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:30.851416  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:30.851473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:30.877392  485986 cri.go:89] found id: ""
	I1205 06:47:30.877406  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.877421  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:30.877425  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:30.877481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:30.902294  485986 cri.go:89] found id: ""
	I1205 06:47:30.902308  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.902315  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:30.902321  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:30.902431  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:30.938796  485986 cri.go:89] found id: ""
	I1205 06:47:30.938810  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.938818  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:30.938823  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:30.938888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:30.965100  485986 cri.go:89] found id: ""
	I1205 06:47:30.965114  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.965121  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:30.965127  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:30.965183  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:30.992646  485986 cri.go:89] found id: ""
	I1205 06:47:30.992661  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.992668  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:30.992676  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:30.992686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:31.063641  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:31.063661  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:31.081045  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:31.081060  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:31.156684  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
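	The repeated "connection refused" on localhost:8441 is consistent with the crictl listings above: no kube-apiserver container exists, so nothing is listening on the apiserver port. A direct probe fails the same way while the control plane is down (a manual check, not from this run):

	    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"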
	I1205 06:47:31.156698  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:31.156710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:31.237470  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:31.237495  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:33.770808  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:33.780812  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:33.780872  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:33.805688  485986 cri.go:89] found id: ""
	I1205 06:47:33.805701  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.805714  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:33.805719  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:33.805779  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:33.832478  485986 cri.go:89] found id: ""
	I1205 06:47:33.832492  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.832499  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:33.832504  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:33.832560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:33.857669  485986 cri.go:89] found id: ""
	I1205 06:47:33.857683  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.857690  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:33.857695  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:33.857750  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:33.883403  485986 cri.go:89] found id: ""
	I1205 06:47:33.883417  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.883426  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:33.883431  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:33.883490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:33.914197  485986 cri.go:89] found id: ""
	I1205 06:47:33.914212  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.914219  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:33.914224  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:33.914295  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:33.944924  485986 cri.go:89] found id: ""
	I1205 06:47:33.944938  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.944945  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:33.944950  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:33.945007  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:33.973129  485986 cri.go:89] found id: ""
	I1205 06:47:33.973143  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.973151  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:33.973158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:33.973169  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:34.044761  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:34.044781  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:34.061807  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:34.061823  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:34.130826  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:34.123392   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.123937   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.125704   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.126261   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.127281   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:47:34.123392   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.123937   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.125704   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.126261   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.127281   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:47:34.130840  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:34.130851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:34.209603  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:34.209627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:36.743254  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:36.753733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:36.753810  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:36.779655  485986 cri.go:89] found id: ""
	I1205 06:47:36.779669  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.779676  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:36.779681  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:36.779738  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:36.805062  485986 cri.go:89] found id: ""
	I1205 06:47:36.805076  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.805083  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:36.805089  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:36.805152  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:36.830864  485986 cri.go:89] found id: ""
	I1205 06:47:36.830878  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.830886  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:36.830891  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:36.830961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:36.855729  485986 cri.go:89] found id: ""
	I1205 06:47:36.855749  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.855757  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:36.855762  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:36.855819  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:36.881068  485986 cri.go:89] found id: ""
	I1205 06:47:36.881082  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.881089  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:36.881094  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:36.881157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:36.909354  485986 cri.go:89] found id: ""
	I1205 06:47:36.909367  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.909374  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:36.909380  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:36.909450  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:36.939352  485986 cri.go:89] found id: ""
	I1205 06:47:36.939375  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.939388  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:36.939396  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:36.939407  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:36.954937  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:36.954953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:37.027384  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:37.014899   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.015674   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.017332   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.018005   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.020132   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:37.027396  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:37.027407  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:37.108980  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:37.109004  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:37.137603  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:37.137620  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
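For reference, each retry cycle above is the same sweep: one `sudo crictl ps -a --quiet --name=<component>` query per expected control-plane container, where an empty ID list produces the paired `found id: ""` / `No container was found matching` lines. A minimal Go sketch of that sweep (illustrative only: it runs crictl locally under sudo, whereas minikube drives these commands through ssh_runner on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>`
    // calls in the log: IDs of all containers, in any state, whose name
    // matches. An empty result corresponds to the
    // `No container was found matching` warnings above.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list the diagnostic loop walks through.
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        } {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("query for %q failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }

In this run every query comes back empty, which is why each pass falls through to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs instead.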
	I1205 06:47:39.704971  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:39.715073  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:39.715153  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:39.742797  485986 cri.go:89] found id: ""
	I1205 06:47:39.742811  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.742818  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:39.742823  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:39.742882  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:39.767795  485986 cri.go:89] found id: ""
	I1205 06:47:39.767809  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.767816  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:39.767821  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:39.767888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:39.793002  485986 cri.go:89] found id: ""
	I1205 06:47:39.793016  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.793023  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:39.793028  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:39.793108  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:39.819015  485986 cri.go:89] found id: ""
	I1205 06:47:39.819029  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.819036  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:39.819042  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:39.819098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:39.844387  485986 cri.go:89] found id: ""
	I1205 06:47:39.844401  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.844408  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:39.844413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:39.844487  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:39.871624  485986 cri.go:89] found id: ""
	I1205 06:47:39.871638  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.871644  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:39.871650  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:39.871721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:39.897731  485986 cri.go:89] found id: ""
	I1205 06:47:39.897746  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.897754  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:39.897761  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:39.897771  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:39.962937  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:39.955722   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.956200   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957514   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957911   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.959459   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:39.962949  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:39.962960  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:40.058236  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:40.058256  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:40.094003  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:40.094022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:40.167448  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:40.167468  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
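Each cycle opens with the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe: `-f` matches against the full command line, `-x` requires the whole line to match the pattern, and `-n` keeps only the newest match. A rough local equivalent, as a sketch (assumes sudo and procps pgrep are available; pgrep exits non-zero when nothing matches, which Go surfaces as an error):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // apiserverPID mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`.
    // pgrep prints the matching PID and exits 0, or exits 1 with no
    // output when no process matches.
    func apiserverPID() (string, bool) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", false
        }
        return strings.TrimSpace(string(out)), true
    }

    func main() {
        if pid, ok := apiserverPID(); ok {
            fmt.Println("kube-apiserver process found, pid", pid)
        } else {
            fmt.Println("no kube-apiserver process found")
        }
    }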
	I1205 06:47:42.685167  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:42.695150  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:42.695206  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:42.719880  485986 cri.go:89] found id: ""
	I1205 06:47:42.719893  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.719901  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:42.719906  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:42.719965  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:42.748922  485986 cri.go:89] found id: ""
	I1205 06:47:42.748936  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.748943  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:42.748949  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:42.749005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:42.778525  485986 cri.go:89] found id: ""
	I1205 06:47:42.778539  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.778546  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:42.778551  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:42.778610  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:42.804447  485986 cri.go:89] found id: ""
	I1205 06:47:42.804461  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.804468  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:42.804473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:42.804530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:42.829834  485986 cri.go:89] found id: ""
	I1205 06:47:42.829848  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.829855  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:42.829861  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:42.829917  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:42.861917  485986 cri.go:89] found id: ""
	I1205 06:47:42.861937  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.861945  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:42.861951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:42.862011  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:42.889024  485986 cri.go:89] found id: ""
	I1205 06:47:42.889047  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.889055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:42.889063  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:42.889073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:42.954442  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:42.954462  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:42.969793  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:42.969810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:43.044093  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:43.035341   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.036249   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.037845   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.039225   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.040004   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:43.044113  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:43.044124  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:43.137811  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:43.137841  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
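The `describe nodes` step fails identically on every pass: nothing is answering on apiserver port 8441, so each of kubectl's discovery requests ends in `dial tcp [::1]:8441: connect: connection refused`. A bare TCP dial reproduces that check with no TLS or kubeconfig involved (a sketch; run it on the node, or adjust the host for wherever the apiserver should be reachable):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl's errors above are all `dial tcp [::1]:8441: connect:
        // connection refused`; a raw TCP dial to the same port shows
        // whether anything is listening at all.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }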
	I1205 06:47:45.667791  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:45.677638  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:45.677697  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:45.702199  485986 cri.go:89] found id: ""
	I1205 06:47:45.702213  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.702220  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:45.702226  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:45.702284  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:45.726622  485986 cri.go:89] found id: ""
	I1205 06:47:45.726635  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.726642  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:45.726647  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:45.726703  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:45.752464  485986 cri.go:89] found id: ""
	I1205 06:47:45.752477  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.752484  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:45.752489  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:45.752551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:45.777756  485986 cri.go:89] found id: ""
	I1205 06:47:45.777770  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.777777  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:45.777783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:45.777838  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:45.803428  485986 cri.go:89] found id: ""
	I1205 06:47:45.803443  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.803459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:45.803464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:45.803524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:45.829175  485986 cri.go:89] found id: ""
	I1205 06:47:45.829189  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.829196  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:45.829201  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:45.829260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:45.855195  485986 cri.go:89] found id: ""
	I1205 06:47:45.855210  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.855217  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:45.855224  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:45.855235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:45.887261  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:45.887277  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:45.952635  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:45.952655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:45.968248  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:45.968265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:46.039946  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:46.029945   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.031374   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.032091   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.033908   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.034613   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:46.039964  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:46.039975  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
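With no containers to inspect, the remaining evidence comes from the systemd journals: the loop tails the last 400 lines of the crio and kubelet units. The same gather step as a standalone sketch (assumes a systemd host with journalctl; minikube additionally runs this over ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailUnit mirrors `sudo journalctl -u <unit> -n 400`: the last 400
    // journal lines for one systemd unit.
    func tailUnit(unit string) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"crio", "kubelet"} {
            logs, err := tailUnit(unit)
            if err != nil {
                fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("=== %s (%d bytes) ===\n%s", unit, len(logs), logs)
        }
    }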
	I1205 06:47:48.631039  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:48.641171  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:48.641231  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:48.666360  485986 cri.go:89] found id: ""
	I1205 06:47:48.666402  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.666409  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:48.666417  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:48.666473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:48.694222  485986 cri.go:89] found id: ""
	I1205 06:47:48.694237  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.694243  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:48.694249  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:48.694304  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:48.718984  485986 cri.go:89] found id: ""
	I1205 06:47:48.718998  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.719005  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:48.719010  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:48.719067  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:48.744169  485986 cri.go:89] found id: ""
	I1205 06:47:48.744183  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.744190  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:48.744195  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:48.744253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:48.769240  485986 cri.go:89] found id: ""
	I1205 06:47:48.769263  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.769270  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:48.769275  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:48.769341  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:48.798956  485986 cri.go:89] found id: ""
	I1205 06:47:48.798971  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.798978  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:48.798983  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:48.799044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:48.826195  485986 cri.go:89] found id: ""
	I1205 06:47:48.826209  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.826216  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:48.826223  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:48.826233  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:48.892751  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:48.892771  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:48.908154  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:48.908171  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:48.975550  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:48.967655   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.968321   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.969895   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.970429   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.972143   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:48.975561  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:48.975572  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:49.057631  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:49.057651  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
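The dmesg step keeps only warning-and-above kernel messages: `--level warn,err,crit,alert,emerg` filters by priority, `-H` formats timestamps for humans, `-L=never` disables color, `-P` skips the pager, and `tail -n 400` bounds the output. Re-running the exact pipeline from Go is a one-liner (sketch; assumes a util-linux dmesg recent enough to support `-P`):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same pipeline the loop runs via /bin/bash -c: priority-filtered
        // kernel messages, human-readable (-H), uncolored (-L=never),
        // unpaged (-P), bounded to the last 400 lines.
        out, err := exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
        if err != nil {
            fmt.Println("dmesg pipeline failed:", err)
            return
        }
        fmt.Print(string(out))
    }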
	I1205 06:47:51.594813  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:51.606364  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:51.606436  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:51.636377  485986 cri.go:89] found id: ""
	I1205 06:47:51.636391  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.636398  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:51.636403  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:51.636464  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:51.662318  485986 cri.go:89] found id: ""
	I1205 06:47:51.662332  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.662338  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:51.662349  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:51.662430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:51.688886  485986 cri.go:89] found id: ""
	I1205 06:47:51.688900  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.688907  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:51.688911  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:51.688969  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:51.717982  485986 cri.go:89] found id: ""
	I1205 06:47:51.717996  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.718003  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:51.718008  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:51.718066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:51.744748  485986 cri.go:89] found id: ""
	I1205 06:47:51.744762  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.744769  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:51.744783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:51.744840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:51.769889  485986 cri.go:89] found id: ""
	I1205 06:47:51.769903  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.769909  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:51.769915  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:51.769970  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:51.797012  485986 cri.go:89] found id: ""
	I1205 06:47:51.797026  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.797033  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:51.797040  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:51.797050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:51.871624  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:51.871643  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.901592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:51.901609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:51.968311  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:51.968333  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:51.983733  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:51.983748  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:52.057625  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:52.048335   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.049167   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.050935   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.051486   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.053878   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:54.557903  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:54.568103  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:54.568164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:54.597086  485986 cri.go:89] found id: ""
	I1205 06:47:54.597100  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.597107  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:54.597112  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:54.597168  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:54.622728  485986 cri.go:89] found id: ""
	I1205 06:47:54.622743  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.622750  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:54.622756  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:54.622812  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:54.646642  485986 cri.go:89] found id: ""
	I1205 06:47:54.646656  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.646663  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:54.646668  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:54.646723  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:54.671271  485986 cri.go:89] found id: ""
	I1205 06:47:54.671286  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.671293  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:54.671299  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:54.671355  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:54.696124  485986 cri.go:89] found id: ""
	I1205 06:47:54.696138  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.696150  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:54.696155  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:54.696210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:54.720362  485986 cri.go:89] found id: ""
	I1205 06:47:54.720375  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.720383  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:54.720388  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:54.720442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:54.754080  485986 cri.go:89] found id: ""
	I1205 06:47:54.754094  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.754101  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:54.754108  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:54.754121  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:54.820260  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:54.820281  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:54.836201  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:54.836217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:54.909051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:54.900823   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.901529   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903370   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903888   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.905505   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:54.909069  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:54.909080  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:54.984892  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:54.984912  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
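The cycle timestamps (06:47:36, :39, :42, :45, ...) show the loop polling on a roughly 3-second cadence, re-gathering logs after every failed pass. A stripped-down stand-in for that wait loop (illustrative only, not minikube's actual wait code; the health check here is reduced to "does any kube-apiserver container exist"):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverUp is a reduced stand-in for the real health check: does
    // any kube-apiserver container exist, in any state?
    func apiserverUp() bool {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--name=kube-apiserver").Output()
        return err == nil && len(out) > 0
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("kube-apiserver container present")
                return
            }
            // In the log, each failed pass triggers a fresh round of log
            // gathering before the next attempt roughly 3 seconds later.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }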
	I1205 06:47:57.516912  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:57.527633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:57.527698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:57.553823  485986 cri.go:89] found id: ""
	I1205 06:47:57.553837  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.553844  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:57.553851  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:57.553924  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:57.581054  485986 cri.go:89] found id: ""
	I1205 06:47:57.581068  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.581075  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:57.581080  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:57.581139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:57.606438  485986 cri.go:89] found id: ""
	I1205 06:47:57.606452  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.606460  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:57.606465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:57.606522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:57.632199  485986 cri.go:89] found id: ""
	I1205 06:47:57.632214  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.632220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:57.632226  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:57.632285  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:57.661439  485986 cri.go:89] found id: ""
	I1205 06:47:57.661454  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.661460  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:57.661465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:57.661521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:57.690916  485986 cri.go:89] found id: ""
	I1205 06:47:57.690930  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.690937  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:57.690943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:57.691003  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:57.716612  485986 cri.go:89] found id: ""
	I1205 06:47:57.716625  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.716632  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:57.716640  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:57.716650  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:57.787213  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:57.787235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:57.802362  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:57.802400  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:57.864350  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:57.856663   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.857331   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.858792   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.859379   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.860927   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:57.864360  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:57.864370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:57.941328  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:57.941349  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
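The "container status" gather uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: resolve crictl if it is on PATH, and if the crictl listing fails, fall back to docker. A simplified Go version of the same idea (sketch; it skips the `which` path resolution and just tries the two commands in order):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the fallback chain in the log, minus the
    // `which crictl` path lookup: try crictl first, then docker.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Print(out)
    }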
	I1205 06:48:00.470137  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:00.483635  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:00.483706  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:00.512315  485986 cri.go:89] found id: ""
	I1205 06:48:00.512330  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.512338  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:00.512345  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:00.512409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:00.546442  485986 cri.go:89] found id: ""
	I1205 06:48:00.546457  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.546464  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:00.546469  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:00.546530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:00.573096  485986 cri.go:89] found id: ""
	I1205 06:48:00.573110  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.573123  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:00.573128  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:00.573187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:00.603254  485986 cri.go:89] found id: ""
	I1205 06:48:00.603268  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.603275  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:00.603280  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:00.603337  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:00.633558  485986 cri.go:89] found id: ""
	I1205 06:48:00.633572  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.633579  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:00.633586  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:00.633651  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:00.660790  485986 cri.go:89] found id: ""
	I1205 06:48:00.660804  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.660810  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:00.660816  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:00.660874  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:00.688773  485986 cri.go:89] found id: ""
	I1205 06:48:00.688786  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.688793  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:00.688800  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:00.688811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:00.753427  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:00.753450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:00.768529  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:00.768545  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:00.832028  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:00.823845   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.824604   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826256   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826855   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.828522   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:00.832038  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:00.832048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:00.909664  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:00.909686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
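Every describe-nodes attempt in this stretch fails the same way: kubectl's discovery client retries its group-list fetch several times (the five memcache.go lines per attempt) and every retry dies with connection refused on localhost:8441, which means nothing is listening on the apiserver port at all. A minimal Go sketch of that one check, independent of kubectl; the address is taken from the log above, everything else is illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A plain TCP dial to the apiserver port seen in the log. "connection
		// refused" here matches the kubectl errors above: no listener yet.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}

A refused dial rules out TLS and auth problems and points at the apiserver container never coming up, which matches the empty crictl listings above.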
	I1205 06:48:03.440768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:03.451151  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:03.451210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:03.475560  485986 cri.go:89] found id: ""
	I1205 06:48:03.475574  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.475580  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:03.475586  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:03.475657  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:03.500266  485986 cri.go:89] found id: ""
	I1205 06:48:03.500280  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.500286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:03.500291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:03.500350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:03.528908  485986 cri.go:89] found id: ""
	I1205 06:48:03.528921  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.528928  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:03.528933  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:03.528993  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:03.556882  485986 cri.go:89] found id: ""
	I1205 06:48:03.556896  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.556903  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:03.556908  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:03.556963  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:03.582231  485986 cri.go:89] found id: ""
	I1205 06:48:03.582244  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.582252  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:03.582257  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:03.582315  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:03.611645  485986 cri.go:89] found id: ""
	I1205 06:48:03.611658  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.611665  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:03.611670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:03.611732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:03.637034  485986 cri.go:89] found id: ""
	I1205 06:48:03.637048  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.637055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:03.637062  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:03.637072  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:03.703283  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:03.703305  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:03.718166  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:03.718182  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:03.784612  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:03.776937   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.777755   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779272   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779806   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.781286   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:03.784623  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:03.784645  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:03.865840  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:03.865871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
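The cycle repeats on a roughly three-second cadence: pgrep for a kube-apiserver process, re-list CRI containers, regather logs. A rough sketch of that wait loop, assuming a plain poll-until-deadline; the function name and the overall timeout are illustrative, not minikube's actual API:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a running kube-apiserver process, mirroring the
	// repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines in the log.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(3 * time.Second) // matches the ~3s gap between cycles above
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}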
	I1205 06:48:06.395611  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:06.406190  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:06.406253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:06.431958  485986 cri.go:89] found id: ""
	I1205 06:48:06.431972  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.431979  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:06.431984  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:06.432047  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:06.457302  485986 cri.go:89] found id: ""
	I1205 06:48:06.457317  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.457324  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:06.457329  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:06.457391  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:06.482778  485986 cri.go:89] found id: ""
	I1205 06:48:06.482793  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.482799  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:06.482805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:06.482860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:06.508293  485986 cri.go:89] found id: ""
	I1205 06:48:06.508307  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.508314  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:06.508319  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:06.508457  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:06.537089  485986 cri.go:89] found id: ""
	I1205 06:48:06.537103  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.537110  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:06.537115  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:06.537175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:06.564731  485986 cri.go:89] found id: ""
	I1205 06:48:06.564745  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.564752  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:06.564759  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:06.564815  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:06.590872  485986 cri.go:89] found id: ""
	I1205 06:48:06.590887  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.590895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:06.590903  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:06.590914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:06.658481  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:06.650418   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.651217   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.652805   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.653354   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.654995   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:06.658495  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:06.658505  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:06.733300  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:06.733322  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:06.768591  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:06.768606  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:06.834509  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:06.834529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
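Each pass walks the same fixed list of control-plane components and asks the CRI for containers by name; the empty ID lists ("found id: \"\"", "0 containers") are what trigger the "No container was found matching ..." warnings. A compact sketch of that scan; the component list and crictl flags are read straight off the log, the rest is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same component order the log checks on every cycle.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// --quiet prints one container ID per line, or nothing at all.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}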
	I1205 06:48:09.350677  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:09.360723  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:09.360783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:09.388219  485986 cri.go:89] found id: ""
	I1205 06:48:09.388232  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.388239  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:09.388244  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:09.388306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:09.416992  485986 cri.go:89] found id: ""
	I1205 06:48:09.417007  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.417013  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:09.417019  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:09.417076  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:09.446304  485986 cri.go:89] found id: ""
	I1205 06:48:09.446318  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.446325  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:09.446330  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:09.446409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:09.472368  485986 cri.go:89] found id: ""
	I1205 06:48:09.472383  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.472390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:09.472395  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:09.472474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:09.497702  485986 cri.go:89] found id: ""
	I1205 06:48:09.497716  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.497722  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:09.497727  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:09.497783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:09.525679  485986 cri.go:89] found id: ""
	I1205 06:48:09.525693  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.525700  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:09.525706  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:09.525765  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:09.552628  485986 cri.go:89] found id: ""
	I1205 06:48:09.552643  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.552650  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:09.552657  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:09.552667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:09.618085  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:09.618105  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:09.633067  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:09.633084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:09.696615  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:09.688707   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.689518   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691086   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691392   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.692864   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:09.696626  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:09.696637  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:09.772055  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:09.772074  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
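The dmesg flags are worth decoding: with util-linux dmesg, -H selects human-readable output, -P disables the pager, -L=never turns colour off, and --level warn,err,crit,alert,emerg keeps only warnings and worse before tail caps the result at the last 400 lines. A small Go wrapper around the exact pipeline from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Kernel-log grab as in the report: warnings and worse, no pager,
		// no colour, capped at 400 lines.
		cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("dmesg failed:", err)
		}
		fmt.Print(string(out))
	}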
	I1205 06:48:12.303940  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:12.314229  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:12.314298  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:12.348459  485986 cri.go:89] found id: ""
	I1205 06:48:12.348473  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.348480  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:12.348485  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:12.348543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:12.373284  485986 cri.go:89] found id: ""
	I1205 06:48:12.373299  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.373306  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:12.373311  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:12.373375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:12.398539  485986 cri.go:89] found id: ""
	I1205 06:48:12.398559  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.398566  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:12.398571  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:12.398635  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:12.423138  485986 cri.go:89] found id: ""
	I1205 06:48:12.423151  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.423158  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:12.423163  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:12.423223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:12.447667  485986 cri.go:89] found id: ""
	I1205 06:48:12.447680  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.447688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:12.447692  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:12.447751  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:12.472343  485986 cri.go:89] found id: ""
	I1205 06:48:12.472357  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.472364  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:12.472369  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:12.472425  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:12.497076  485986 cri.go:89] found id: ""
	I1205 06:48:12.497089  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.497096  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:12.497102  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:12.497112  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:12.574451  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:12.574470  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:12.610910  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:12.610926  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:12.678117  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:12.678135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:12.692476  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:12.692492  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:12.758359  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:12.750295   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.750936   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.752531   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.753043   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.754596   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
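The container-status command is deliberately defensive: "which crictl || echo crictl" keeps the command substitution from emptying out when crictl is not on root's PATH, and the trailing "|| sudo docker ps -a" falls back to Docker when the CRI listing fails. A sketch of the same fallback done natively; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the fallback chain in the log: try crictl first,
	// then fall back to docker only if crictl fails.
	func containerStatus() (string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("both crictl and docker failed: %w", err)
		}
		return string(out), nil
	}

	func main() {
		status, err := containerStatus()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(status)
	}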
	I1205 06:48:15.258636  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:15.270043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:15.270103  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:15.304750  485986 cri.go:89] found id: ""
	I1205 06:48:15.304764  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.304771  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:15.304776  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:15.304832  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:15.344151  485986 cri.go:89] found id: ""
	I1205 06:48:15.344165  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.344172  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:15.344182  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:15.344249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:15.371527  485986 cri.go:89] found id: ""
	I1205 06:48:15.371541  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.371548  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:15.371553  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:15.371618  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:15.403495  485986 cri.go:89] found id: ""
	I1205 06:48:15.403508  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.403515  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:15.403521  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:15.403581  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:15.429409  485986 cri.go:89] found id: ""
	I1205 06:48:15.429424  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.429431  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:15.429436  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:15.429501  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:15.459234  485986 cri.go:89] found id: ""
	I1205 06:48:15.459248  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.459257  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:15.459263  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:15.459320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:15.488886  485986 cri.go:89] found id: ""
	I1205 06:48:15.488900  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.488907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:15.488915  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:15.488925  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:15.556219  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:15.556239  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:15.571562  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:15.571579  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:15.635494  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:15.628155   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.628632   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630326   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630665   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.632132   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:15.635504  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:15.635514  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:15.717719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:15.717740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
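Unit logs are pulled the same way on every cycle: the last 400 journal lines for kubelet and for crio. A small sketch of that gather step; the helper name is illustrative, the journalctl flags come from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs grabs the last n journal lines for a systemd unit, as the report
	// does for kubelet and crio.
	func unitLogs(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "crio"} {
			logs, err := unitLogs(unit, 400)
			if err != nil {
				fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
				continue
			}
			fmt.Printf("== last 400 lines of %s ==\n%s", unit, logs)
		}
	}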
	I1205 06:48:18.253466  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:18.263430  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:18.263491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:18.305028  485986 cri.go:89] found id: ""
	I1205 06:48:18.305042  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.305049  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:18.305054  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:18.305111  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:18.332689  485986 cri.go:89] found id: ""
	I1205 06:48:18.332702  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.332709  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:18.332715  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:18.332770  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:18.360205  485986 cri.go:89] found id: ""
	I1205 06:48:18.360220  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.360227  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:18.360232  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:18.360291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:18.385479  485986 cri.go:89] found id: ""
	I1205 06:48:18.385493  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.385500  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:18.385505  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:18.385560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:18.413258  485986 cri.go:89] found id: ""
	I1205 06:48:18.413272  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.413279  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:18.413286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:18.413348  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:18.439018  485986 cri.go:89] found id: ""
	I1205 06:48:18.439032  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.439039  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:18.439044  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:18.439099  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:18.465311  485986 cri.go:89] found id: ""
	I1205 06:48:18.465324  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.465341  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:18.465348  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:18.465359  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:18.479885  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:18.479902  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:18.543997  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:18.536169   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.536669   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538416   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538850   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.540401   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:18.544007  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:18.544018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:18.620924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:18.620948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.655034  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:18.655050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
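Note that describe nodes is run with the versioned kubectl minikube ships inside the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the in-node kubeconfig, so the refused connection to localhost:8441 is observed from inside the node, not from the test host. A sketch of the same call with an added context timeout so a dead apiserver fails fast; the paths come from the log, the timeout is an assumption:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the call so a dead apiserver fails fast instead of hanging.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Binary and kubeconfig paths are taken verbatim from the log.
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}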
	I1205 06:48:21.222770  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:21.233411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:21.233478  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:21.263287  485986 cri.go:89] found id: ""
	I1205 06:48:21.263302  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.263309  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:21.263315  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:21.263379  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:21.300915  485986 cri.go:89] found id: ""
	I1205 06:48:21.300929  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.300936  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:21.300941  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:21.301005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:21.328975  485986 cri.go:89] found id: ""
	I1205 06:48:21.328989  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.328999  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:21.329004  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:21.329061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:21.358828  485986 cri.go:89] found id: ""
	I1205 06:48:21.358842  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.358849  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:21.358854  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:21.358914  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:21.384401  485986 cri.go:89] found id: ""
	I1205 06:48:21.384422  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.384429  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:21.384434  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:21.384491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:21.409705  485986 cri.go:89] found id: ""
	I1205 06:48:21.409719  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.409726  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:21.409732  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:21.409791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:21.437633  485986 cri.go:89] found id: ""
	I1205 06:48:21.437650  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.437658  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:21.437665  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:21.437675  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:21.515785  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:21.515808  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:21.549019  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:21.549035  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:21.620027  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:21.620048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:21.635622  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:21.635638  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:21.710252  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:21.702462   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.703235   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.704737   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.705215   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.706750   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:24.210507  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:24.221002  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:24.221061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:24.246259  485986 cri.go:89] found id: ""
	I1205 06:48:24.246273  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.246280  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:24.246285  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:24.246350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:24.274723  485986 cri.go:89] found id: ""
	I1205 06:48:24.274736  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.274743  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:24.274749  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:24.274807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:24.312165  485986 cri.go:89] found id: ""
	I1205 06:48:24.312179  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.312186  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:24.312191  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:24.312248  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:24.351913  485986 cri.go:89] found id: ""
	I1205 06:48:24.351927  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.351934  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:24.351939  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:24.351995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:24.377944  485986 cri.go:89] found id: ""
	I1205 06:48:24.377958  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.377966  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:24.377971  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:24.378029  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:24.403127  485986 cri.go:89] found id: ""
	I1205 06:48:24.403142  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.403149  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:24.403154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:24.403211  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:24.428745  485986 cri.go:89] found id: ""
	I1205 06:48:24.428760  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.428777  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:24.428785  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:24.428795  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:24.495838  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:24.495860  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:24.511294  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:24.511309  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:24.577637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:24.569622   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.570426   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.571915   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.572368   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.573899   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:24.577647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:24.577658  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:24.664395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:24.664422  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:27.196552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:27.206670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:27.206729  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:27.232859  485986 cri.go:89] found id: ""
	I1205 06:48:27.232873  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.232880  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:27.232885  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:27.232944  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:27.261077  485986 cri.go:89] found id: ""
	I1205 06:48:27.261091  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.261098  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:27.261104  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:27.261157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:27.299035  485986 cri.go:89] found id: ""
	I1205 06:48:27.299049  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.299056  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:27.299061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:27.299117  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:27.325080  485986 cri.go:89] found id: ""
	I1205 06:48:27.325094  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.325100  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:27.325105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:27.325165  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:27.355194  485986 cri.go:89] found id: ""
	I1205 06:48:27.355208  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.355215  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:27.355220  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:27.355281  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:27.380260  485986 cri.go:89] found id: ""
	I1205 06:48:27.380274  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.380281  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:27.380286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:27.380340  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:27.404746  485986 cri.go:89] found id: ""
	I1205 06:48:27.404760  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.404767  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:27.404774  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:27.404784  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:27.471214  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:27.471234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:27.486196  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:27.486213  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:27.549013  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:27.540412   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.541998   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.542711   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544155   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544607   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:27.540412   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.541998   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.542711   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544155   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544607   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:27.549023  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:27.549034  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:27.626719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:27.626740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.157779  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:30.168828  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:30.168888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:30.196472  485986 cri.go:89] found id: ""
	I1205 06:48:30.196487  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.196494  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:30.196500  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:30.196561  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:30.222435  485986 cri.go:89] found id: ""
	I1205 06:48:30.222449  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.222456  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:30.222463  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:30.222521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:30.252893  485986 cri.go:89] found id: ""
	I1205 06:48:30.252907  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.252914  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:30.252919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:30.252979  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:30.293703  485986 cri.go:89] found id: ""
	I1205 06:48:30.293717  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.293724  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:30.293729  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:30.293791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:30.323711  485986 cri.go:89] found id: ""
	I1205 06:48:30.323724  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.323731  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:30.323746  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:30.323804  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:30.355817  485986 cri.go:89] found id: ""
	I1205 06:48:30.355831  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.355838  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:30.355844  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:30.355905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:30.384820  485986 cri.go:89] found id: ""
	I1205 06:48:30.384834  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.384850  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:30.384858  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:30.384869  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:30.400554  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:30.400571  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:30.462509  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:30.454797   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.455349   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.456851   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.457304   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.458799   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:30.454797   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.455349   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.456851   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.457304   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.458799   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:30.462519  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:30.462529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:30.539861  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:30.539884  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.572611  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:30.572627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
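
Each cycle lists containers once per control-plane component with sudo crictl ps -a --quiet --name=NAME; --quiet prints bare container IDs, one per line, so empty output is exactly the found id: "" / "0 containers" result recorded above. A sketch of that query follows, assuming crictl is on PATH and sudo is available; containerIDs is a hypothetical helper, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same ID-only crictl query the log shows; an empty
// result (exit 0, no output) yields an empty slice.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers matching %q: %v\n", len(ids), c, ids)
	}
}
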
	I1205 06:48:33.142900  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:33.153456  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:33.153522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:33.178913  485986 cri.go:89] found id: ""
	I1205 06:48:33.178926  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.178933  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:33.178939  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:33.178994  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:33.204173  485986 cri.go:89] found id: ""
	I1205 06:48:33.204187  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.204195  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:33.204200  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:33.204260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:33.231661  485986 cri.go:89] found id: ""
	I1205 06:48:33.231675  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.231688  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:33.231693  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:33.231749  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:33.256100  485986 cri.go:89] found id: ""
	I1205 06:48:33.256113  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.256120  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:33.256125  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:33.256180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:33.288692  485986 cri.go:89] found id: ""
	I1205 06:48:33.288706  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.288713  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:33.288718  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:33.288778  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:33.322902  485986 cri.go:89] found id: ""
	I1205 06:48:33.322916  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.322931  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:33.322936  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:33.322995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:33.354832  485986 cri.go:89] found id: ""
	I1205 06:48:33.354846  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.354853  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:33.354861  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:33.354871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:33.419523  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:33.419542  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:33.436533  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:33.436549  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:33.500717  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:33.492589   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.493351   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.494906   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.495229   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.497011   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:33.492589   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.493351   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.494906   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.495229   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.497011   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:33.500727  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:33.500744  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:33.576166  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:33.576187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.103891  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:36.114026  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:36.114086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:36.138404  485986 cri.go:89] found id: ""
	I1205 06:48:36.138419  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.138426  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:36.138432  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:36.138490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:36.165135  485986 cri.go:89] found id: ""
	I1205 06:48:36.165149  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.165156  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:36.165161  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:36.165218  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:36.190238  485986 cri.go:89] found id: ""
	I1205 06:48:36.190252  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.190259  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:36.190264  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:36.190323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:36.216962  485986 cri.go:89] found id: ""
	I1205 06:48:36.216975  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.216982  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:36.216987  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:36.217043  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:36.241075  485986 cri.go:89] found id: ""
	I1205 06:48:36.241089  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.241096  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:36.241107  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:36.241174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:36.267257  485986 cri.go:89] found id: ""
	I1205 06:48:36.267272  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.267278  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:36.267284  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:36.267350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:36.293288  485986 cri.go:89] found id: ""
	I1205 06:48:36.293310  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.293320  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:36.293327  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:36.293338  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:36.363749  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:36.356228   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.356654   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358204   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358589   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.360031   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:36.356228   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.356654   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358204   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358589   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.360031   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:36.363759  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:36.363769  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:36.438180  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:36.438203  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.466903  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:36.466919  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:36.532968  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:36.532989  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
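
The apparent duplication inside each "failed describe nodes" block is the error format itself: the process failure is printed with its stdout and stderr inline, and the captured stderr is then repeated between ** stderr ** markers. A small sketch that reproduces the same shape with an arbitrary failing command; this is illustrative only, not minikube's formatting code.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Any command that writes to stderr and exits non-zero will do here.
	cmd := exec.Command("/bin/bash", "-c", "echo oops >&2; exit 1")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()

	// Same shape as the log's failed-command entries: the error plus
	// stdout/stderr inline, then the stderr repeated between markers.
	fmt.Printf("failed: %v\nstdout:\n%s\nstderr:\n%s", err, stdout.String(), stderr.String())
	fmt.Printf(" output: \n** stderr ** \n%s** /stderr **\n", stderr.String())
}
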
	I1205 06:48:39.048421  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:39.059045  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:39.059109  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:39.083511  485986 cri.go:89] found id: ""
	I1205 06:48:39.083526  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.083532  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:39.083537  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:39.083599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:39.107712  485986 cri.go:89] found id: ""
	I1205 06:48:39.107725  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.107732  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:39.107736  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:39.107793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:39.132566  485986 cri.go:89] found id: ""
	I1205 06:48:39.132580  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.132588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:39.132593  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:39.132650  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:39.161417  485986 cri.go:89] found id: ""
	I1205 06:48:39.161431  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.161438  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:39.161443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:39.161511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:39.186314  485986 cri.go:89] found id: ""
	I1205 06:48:39.186328  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.186335  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:39.186340  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:39.186428  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:39.210957  485986 cri.go:89] found id: ""
	I1205 06:48:39.210971  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.210980  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:39.210986  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:39.211044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:39.236120  485986 cri.go:89] found id: ""
	I1205 06:48:39.236134  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.236141  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:39.236148  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:39.236159  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:39.250894  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:39.250911  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:39.334545  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:39.334556  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:39.334567  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:39.413949  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:39.413970  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:39.444354  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:39.444370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.015174  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:42.026667  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:42.026732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:42.056643  485986 cri.go:89] found id: ""
	I1205 06:48:42.056658  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.056666  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:42.056672  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:42.056732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:42.084714  485986 cri.go:89] found id: ""
	I1205 06:48:42.084731  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.084745  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:42.084750  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:42.084817  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:42.115735  485986 cri.go:89] found id: ""
	I1205 06:48:42.115750  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.115757  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:42.115763  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:42.115828  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:42.148687  485986 cri.go:89] found id: ""
	I1205 06:48:42.148703  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.148711  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:42.148717  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:42.148783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:42.183060  485986 cri.go:89] found id: ""
	I1205 06:48:42.183076  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.183084  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:42.183089  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:42.183162  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:42.216582  485986 cri.go:89] found id: ""
	I1205 06:48:42.216598  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.216606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:42.216612  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:42.216684  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:42.247171  485986 cri.go:89] found id: ""
	I1205 06:48:42.247186  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.247193  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:42.247201  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:42.247217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:42.285459  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:42.285487  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.355504  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:42.355523  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:42.370693  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:42.370709  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:42.438568  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:42.438578  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:42.438588  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:45.014965  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:45.054270  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:45.054339  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:45.114057  485986 cri.go:89] found id: ""
	I1205 06:48:45.114075  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.114090  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:45.114097  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:45.114172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:45.165369  485986 cri.go:89] found id: ""
	I1205 06:48:45.165394  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.165402  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:45.165408  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:45.165494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:45.212325  485986 cri.go:89] found id: ""
	I1205 06:48:45.212342  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.212349  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:45.212355  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:45.212424  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:45.254096  485986 cri.go:89] found id: ""
	I1205 06:48:45.254114  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.254127  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:45.254134  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:45.254294  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:45.305666  485986 cri.go:89] found id: ""
	I1205 06:48:45.305681  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.305688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:45.305694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:45.305753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:45.347701  485986 cri.go:89] found id: ""
	I1205 06:48:45.347715  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.347721  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:45.347726  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:45.347793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:45.373745  485986 cri.go:89] found id: ""
	I1205 06:48:45.373760  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.373775  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:45.373782  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:45.373793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:45.439756  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:45.439776  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:45.454781  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:45.454797  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:45.521815  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:45.514514   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.515029   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516480   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516967   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.518548   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:45.514514   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.515029   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516480   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516967   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.518548   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:45.521826  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:45.521838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:45.602427  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:45.602455  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.134541  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:48.144703  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:48.144768  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:48.169929  485986 cri.go:89] found id: ""
	I1205 06:48:48.169942  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.169949  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:48.169954  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:48.170014  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:48.194815  485986 cri.go:89] found id: ""
	I1205 06:48:48.194828  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.194835  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:48.194840  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:48.194898  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:48.220017  485986 cri.go:89] found id: ""
	I1205 06:48:48.220031  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.220038  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:48.220043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:48.220101  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:48.249449  485986 cri.go:89] found id: ""
	I1205 06:48:48.249462  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.249470  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:48.249481  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:48.249552  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:48.284921  485986 cri.go:89] found id: ""
	I1205 06:48:48.284935  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.284942  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:48.284947  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:48.285006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:48.315138  485986 cri.go:89] found id: ""
	I1205 06:48:48.315152  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.315159  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:48.315164  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:48.315223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:48.347265  485986 cri.go:89] found id: ""
	I1205 06:48:48.347279  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.347286  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:48.347293  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:48.347304  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.375662  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:48.375678  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:48.440841  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:48.440863  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:48.456128  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:48.456144  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:48.523196  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:48.515425   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.515785   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.517359   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.518051   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.519586   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:48.515425   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.515785   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.517359   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.518051   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.519586   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:48.523206  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:48.523216  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
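
The underlying symptom in every cycle is identical: nothing is listening on localhost:8441, and the dial reaches the IPv6 loopback first, hence the repeated dial tcp [::1]:8441 connection-refused errors, which matches the empty kube-apiserver container listings. A quick probe of that condition, as a sketch assuming the same host and port:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port the way kubectl would reach localhost:8441;
	// with no apiserver process bound, this fails fast with "connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
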
	I1205 06:48:51.100852  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:51.111413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:51.111475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:51.139392  485986 cri.go:89] found id: ""
	I1205 06:48:51.139406  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.139414  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:51.139419  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:51.139483  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:51.167265  485986 cri.go:89] found id: ""
	I1205 06:48:51.167279  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.167286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:51.167291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:51.167347  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:51.192337  485986 cri.go:89] found id: ""
	I1205 06:48:51.192351  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.192358  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:51.192363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:51.192419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:51.217599  485986 cri.go:89] found id: ""
	I1205 06:48:51.217614  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.217621  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:51.217627  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:51.217683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:51.242555  485986 cri.go:89] found id: ""
	I1205 06:48:51.242568  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.242576  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:51.242580  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:51.242641  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:51.270447  485986 cri.go:89] found id: ""
	I1205 06:48:51.270462  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.270469  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:51.270474  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:51.270551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:51.300340  485986 cri.go:89] found id: ""
	I1205 06:48:51.300353  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.300360  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:51.300375  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:51.300385  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:51.373583  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:51.373604  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:51.388609  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:51.388624  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:51.449562  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:51.442150   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.442836   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444322   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444649   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.446074   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:51.449572  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:51.449584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:51.523352  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:51.523373  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
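Each "Gathering logs for ..." step is a shell pipeline executed through `/bin/bash -c`, and the container-status command falls back from `crictl` to `docker ps -a` when `crictl` is absent. A minimal sketch of that step, with the pipelines taken verbatim from the log (the error handling and output formatting here are illustrative assumptions):

```go
// Sketch of the log-gathering step: each source is a shell pipeline run via
// `bash -c`. Commands are copied from the log above; only the wrapper is new.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet": `sudo journalctl -u kubelet -n 400`,
		"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"CRI-O":   `sudo journalctl -u crio -n 400`,
		// Backticks make the shell substitute crictl's path, falling back to docker.
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
```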
	I1205 06:48:54.052404  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:54.065168  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:54.065280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:54.097086  485986 cri.go:89] found id: ""
	I1205 06:48:54.097102  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.097109  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:54.097114  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:54.097173  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:54.128973  485986 cri.go:89] found id: ""
	I1205 06:48:54.128988  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.128995  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:54.129000  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:54.129066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:54.163279  485986 cri.go:89] found id: ""
	I1205 06:48:54.163294  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.163301  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:54.163305  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:54.163363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:54.200034  485986 cri.go:89] found id: ""
	I1205 06:48:54.200049  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.200056  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:54.200061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:54.200119  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:54.232483  485986 cri.go:89] found id: ""
	I1205 06:48:54.232498  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.232504  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:54.232509  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:54.232572  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:54.256577  485986 cri.go:89] found id: ""
	I1205 06:48:54.256598  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.256606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:54.256611  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:54.256673  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:54.288762  485986 cri.go:89] found id: ""
	I1205 06:48:54.288788  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.288796  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:54.288804  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:54.288815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:54.368738  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:54.368758  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.395932  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:54.395948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:54.464047  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:54.464066  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:54.479400  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:54.479416  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:54.546819  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:54.538668   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.539294   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.540985   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.541524   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.543065   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
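Every `kubectl describe nodes` attempt in this section fails identically: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port at all (this cluster uses 8441 rather than the more common 8443). The same condition can be checked directly with a plain TCP dial, as in this sketch:

```go
// Probe the apiserver port the way the failing kubectl calls do.
// Port 8441 is taken from the log; adjust for other clusters.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On this node: "connect: connection refused".
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```

Against a healthy node the dial succeeds; here it would confirm what the empty `crictl` listings already show, namely that the kube-apiserver container never started.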
	I1205 06:48:57.047675  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:57.058076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:57.058143  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:57.082332  485986 cri.go:89] found id: ""
	I1205 06:48:57.082347  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.082355  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:57.082360  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:57.082442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:57.108051  485986 cri.go:89] found id: ""
	I1205 06:48:57.108071  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.108078  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:57.108083  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:57.108139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:57.137107  485986 cri.go:89] found id: ""
	I1205 06:48:57.137129  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.137136  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:57.137141  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:57.137198  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:57.163240  485986 cri.go:89] found id: ""
	I1205 06:48:57.163272  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.163279  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:57.163285  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:57.163352  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:57.192699  485986 cri.go:89] found id: ""
	I1205 06:48:57.192725  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.192735  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:57.192740  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:57.192807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:57.220916  485986 cri.go:89] found id: ""
	I1205 06:48:57.220931  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.220938  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:57.220943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:57.221010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:57.248028  485986 cri.go:89] found id: ""
	I1205 06:48:57.248042  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.248049  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:57.248057  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:57.248068  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:57.262955  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:57.262971  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:57.355127  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:57.346596   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.347188   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.348974   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.349619   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.351449   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:57.355141  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:57.355151  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:57.433116  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:57.433135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:57.464587  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:57.464603  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.033434  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:00.083145  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:00.083219  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:00.188573  485986 cri.go:89] found id: ""
	I1205 06:49:00.188591  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.188607  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:00.188613  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:00.188683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:00.262241  485986 cri.go:89] found id: ""
	I1205 06:49:00.262258  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.262265  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:00.262271  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:00.262346  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:00.303849  485986 cri.go:89] found id: ""
	I1205 06:49:00.303866  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.303875  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:00.303881  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:00.303981  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:00.349047  485986 cri.go:89] found id: ""
	I1205 06:49:00.349063  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.349071  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:00.349076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:00.349147  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:00.379299  485986 cri.go:89] found id: ""
	I1205 06:49:00.379317  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.379325  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:00.379332  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:00.379419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:00.409559  485986 cri.go:89] found id: ""
	I1205 06:49:00.409575  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.409582  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:00.409589  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:00.409656  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:00.439884  485986 cri.go:89] found id: ""
	I1205 06:49:00.439899  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.439907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:00.439916  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:00.439933  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.508652  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:00.508672  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:00.524482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:00.524504  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:00.586066  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:00.578087   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.578919   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.580633   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.581135   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.582631   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:00.586076  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:00.586087  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:00.663208  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:00.663229  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:03.193638  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:03.204025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:03.204086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:03.228565  485986 cri.go:89] found id: ""
	I1205 06:49:03.228579  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.228586  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:03.228592  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:03.228649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:03.254850  485986 cri.go:89] found id: ""
	I1205 06:49:03.254864  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.254871  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:03.254876  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:03.254937  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:03.289088  485986 cri.go:89] found id: ""
	I1205 06:49:03.289101  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.289108  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:03.289113  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:03.289194  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:03.322876  485986 cri.go:89] found id: ""
	I1205 06:49:03.322891  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.322905  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:03.322910  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:03.322971  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:03.352868  485986 cri.go:89] found id: ""
	I1205 06:49:03.352883  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.352890  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:03.352895  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:03.352957  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:03.381474  485986 cri.go:89] found id: ""
	I1205 06:49:03.381495  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.381502  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:03.381508  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:03.381569  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:03.410037  485986 cri.go:89] found id: ""
	I1205 06:49:03.410051  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.410058  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:03.410071  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:03.410081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:03.479009  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:03.479028  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:03.493685  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:03.493702  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:03.561170  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:03.553306   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.554220   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.555759   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.556128   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.557670   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:03.561179  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:03.561190  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:03.638291  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:03.638315  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.175002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:06.185259  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:06.185319  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:06.215092  485986 cri.go:89] found id: ""
	I1205 06:49:06.215106  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.215113  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:06.215119  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:06.215175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:06.245195  485986 cri.go:89] found id: ""
	I1205 06:49:06.245209  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.245216  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:06.245221  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:06.245283  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:06.271319  485986 cri.go:89] found id: ""
	I1205 06:49:06.271333  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.271340  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:06.271346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:06.271404  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:06.300132  485986 cri.go:89] found id: ""
	I1205 06:49:06.300146  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.300152  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:06.300158  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:06.300216  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:06.337931  485986 cri.go:89] found id: ""
	I1205 06:49:06.337945  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.337952  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:06.337957  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:06.338017  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:06.365963  485986 cri.go:89] found id: ""
	I1205 06:49:06.365978  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.365985  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:06.365991  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:06.366048  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:06.396366  485986 cri.go:89] found id: ""
	I1205 06:49:06.396382  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.396389  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:06.396397  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:06.396410  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.424940  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:06.424956  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:06.490847  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:06.490864  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:06.506209  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:06.506225  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:06.572331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:06.564878   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.565472   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.566969   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.567471   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.568900   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:06.572342  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:06.572352  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
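The cycle then repeats: the `pgrep -xnf kube-apiserver.*minikube.*` probe runs roughly every three seconds (visible in the timestamps), and each miss triggers another full round of container listings and log gathering. A sketch of that wait-loop shape; the three-second interval matches the log, but the overall timeout is an illustrative guess, not minikube's configured value:

```go
// Wait-loop sketch: poll for a kube-apiserver process until it appears or a
// deadline passes. pgrep exits non-zero when no process matches the pattern.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```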
	I1205 06:49:09.157509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:09.167469  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:09.167529  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:09.192289  485986 cri.go:89] found id: ""
	I1205 06:49:09.192304  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.192311  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:09.192316  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:09.192375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:09.217082  485986 cri.go:89] found id: ""
	I1205 06:49:09.217096  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.217103  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:09.217108  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:09.217167  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:09.242357  485986 cri.go:89] found id: ""
	I1205 06:49:09.242371  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.242412  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:09.242417  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:09.242474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:09.267197  485986 cri.go:89] found id: ""
	I1205 06:49:09.267211  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.267218  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:09.267223  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:09.267282  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:09.302740  485986 cri.go:89] found id: ""
	I1205 06:49:09.302754  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.302761  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:09.302766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:09.302824  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:09.338883  485986 cri.go:89] found id: ""
	I1205 06:49:09.338910  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.338917  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:09.338923  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:09.338988  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:09.365834  485986 cri.go:89] found id: ""
	I1205 06:49:09.365848  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.365855  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:09.365862  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:09.365872  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:09.433408  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:09.433430  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:09.448763  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:09.448785  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:09.510400  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:09.502828   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.503352   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505004   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505431   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.506857   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:09.510413  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:09.510424  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:09.589135  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:09.589155  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:12.118439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:12.128584  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:12.128642  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:12.153045  485986 cri.go:89] found id: ""
	I1205 06:49:12.153059  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.153066  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:12.153071  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:12.153138  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:12.181785  485986 cri.go:89] found id: ""
	I1205 06:49:12.181798  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.181805  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:12.181810  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:12.181867  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:12.208813  485986 cri.go:89] found id: ""
	I1205 06:49:12.208827  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.208834  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:12.208845  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:12.208903  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:12.234917  485986 cri.go:89] found id: ""
	I1205 06:49:12.234931  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.234938  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:12.234943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:12.235004  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:12.260438  485986 cri.go:89] found id: ""
	I1205 06:49:12.260452  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.260459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:12.260464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:12.260531  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:12.296968  485986 cri.go:89] found id: ""
	I1205 06:49:12.296981  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.296988  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:12.296994  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:12.297050  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:12.333915  485986 cri.go:89] found id: ""
	I1205 06:49:12.333929  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.333936  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:12.333943  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:12.333953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:12.406977  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:12.406998  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:12.422290  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:12.422306  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:12.488646  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:12.480809   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.481450   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.482905   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.483507   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.485097   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:12.488656  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:12.488666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:12.564028  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:12.564050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.095313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:15.105802  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:15.105864  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:15.133035  485986 cri.go:89] found id: ""
	I1205 06:49:15.133049  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.133057  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:15.133062  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:15.133118  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:15.158425  485986 cri.go:89] found id: ""
	I1205 06:49:15.158439  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.158446  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:15.158451  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:15.158507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:15.183550  485986 cri.go:89] found id: ""
	I1205 06:49:15.183564  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.183571  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:15.183576  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:15.183637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:15.209390  485986 cri.go:89] found id: ""
	I1205 06:49:15.209405  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.209413  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:15.209418  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:15.209481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:15.234806  485986 cri.go:89] found id: ""
	I1205 06:49:15.234820  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.234828  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:15.234833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:15.234893  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:15.260606  485986 cri.go:89] found id: ""
	I1205 06:49:15.260621  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.260628  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:15.260633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:15.260689  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:15.291752  485986 cri.go:89] found id: ""
	I1205 06:49:15.291766  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.291773  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:15.291782  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:15.291793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:15.308482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:15.308499  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:15.380232  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:15.372488   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.372953   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374118   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374587   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.376095   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:15.380242  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:15.380253  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:15.456924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:15.456947  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.486075  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:15.486091  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
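
Every cycle above ends the same way: kubectl describe nodes cannot reach the API server because nothing is listening on localhost:8441, so the harness falls back to gathering kubelet, dmesg, CRI-O, and container-status output. A minimal sketch of how to confirm that symptom by hand, assuming shell access to the minikube node (these commands are mine, not part of the harness):

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep :8441 || echo "nothing listening on :8441"
    # Does the apiserver answer its health endpoint? (-k: self-signed certs)
    curl -sk https://localhost:8441/healthz; echo
    # Has CRI-O ever created an apiserver container?
    sudo crictl ps -a --name kube-apiserver
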
	I1205 06:49:18.055175  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:18.065657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:18.065716  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:18.092418  485986 cri.go:89] found id: ""
	I1205 06:49:18.092432  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.092440  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:18.092445  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:18.092504  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:18.119095  485986 cri.go:89] found id: ""
	I1205 06:49:18.119109  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.119116  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:18.119120  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:18.119174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:18.158317  485986 cri.go:89] found id: ""
	I1205 06:49:18.158331  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.158338  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:18.158343  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:18.158435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:18.182920  485986 cri.go:89] found id: ""
	I1205 06:49:18.182934  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.182941  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:18.182946  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:18.183006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:18.209415  485986 cri.go:89] found id: ""
	I1205 06:49:18.209430  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.209438  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:18.209443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:18.209512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:18.236631  485986 cri.go:89] found id: ""
	I1205 06:49:18.236644  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.236651  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:18.236656  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:18.236713  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:18.262726  485986 cri.go:89] found id: ""
	I1205 06:49:18.262740  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.262747  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:18.262754  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:18.262765  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:18.339996  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:18.340018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:18.358676  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:18.358696  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:18.426638  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:18.417748   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.418455   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420167   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420749   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.422549   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:18.417748   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.418455   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420167   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420749   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.422549   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:18.426647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:18.426706  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:18.504263  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:18.504284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
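
The roughly 3-second cadence between successive "sudo pgrep -xnf kube-apiserver.*minikube.*" runs shows the harness polling for the apiserver process and re-gathering diagnostics on every miss. An equivalent standalone wait loop looks roughly like this (the 3s interval matches the timestamps above; the 120s deadline is an assumption for illustration, not the harness's actual timeout):

    deadline=$((SECONDS + 120))   # assumed timeout, illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "kube-apiserver never appeared" >&2
            exit 1
        fi
        sleep 3   # matches the observed polling interval
    done
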
	I1205 06:49:21.036369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:21.046428  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:21.046488  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:21.071147  485986 cri.go:89] found id: ""
	I1205 06:49:21.071161  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.071168  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:21.071173  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:21.071235  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:21.095397  485986 cri.go:89] found id: ""
	I1205 06:49:21.095412  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.095421  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:21.095426  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:21.095485  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:21.119759  485986 cri.go:89] found id: ""
	I1205 06:49:21.119773  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.119780  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:21.119786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:21.119850  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:21.144972  485986 cri.go:89] found id: ""
	I1205 06:49:21.144986  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.144993  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:21.144998  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:21.145054  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:21.170022  485986 cri.go:89] found id: ""
	I1205 06:49:21.170035  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.170042  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:21.170047  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:21.170104  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:21.198867  485986 cri.go:89] found id: ""
	I1205 06:49:21.198881  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.198887  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:21.198893  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:21.198948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:21.224547  485986 cri.go:89] found id: ""
	I1205 06:49:21.224561  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.224568  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:21.224575  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:21.224585  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:21.291060  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:21.291081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:21.308799  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:21.308815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:21.380254  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:21.380264  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:21.380275  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:21.456817  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:21.456838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:23.986703  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:23.996959  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:23.997028  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:24.026421  485986 cri.go:89] found id: ""
	I1205 06:49:24.026435  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.026443  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:24.026450  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:24.026512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:24.056568  485986 cri.go:89] found id: ""
	I1205 06:49:24.056582  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.056589  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:24.056595  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:24.056654  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:24.082518  485986 cri.go:89] found id: ""
	I1205 06:49:24.082532  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.082539  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:24.082544  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:24.082605  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:24.108752  485986 cri.go:89] found id: ""
	I1205 06:49:24.108766  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.108783  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:24.108788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:24.108854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:24.142101  485986 cri.go:89] found id: ""
	I1205 06:49:24.142133  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.142140  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:24.142146  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:24.142214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:24.169035  485986 cri.go:89] found id: ""
	I1205 06:49:24.169050  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.169057  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:24.169067  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:24.169139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:24.194140  485986 cri.go:89] found id: ""
	I1205 06:49:24.194154  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.194161  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:24.194169  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:24.194179  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:24.269020  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:24.269042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:24.319041  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:24.319057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:24.402423  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:24.402446  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:24.418669  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:24.418687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:24.486837  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
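
Each poll walks the same seven component names through "crictl ps -a --quiet --name=..." and gets an empty ID list for every one of them, meaning the kubelet has not created a single control-plane container. The per-component queries above collapse into one loop, sketched here for manual use:

    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        printf '%s: %s\n' "$c" "${ids:-<none>}"
    done
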
	I1205 06:49:26.988496  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:26.998567  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:26.998632  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:27.030118  485986 cri.go:89] found id: ""
	I1205 06:49:27.030131  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.030138  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:27.030144  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:27.030200  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:27.057209  485986 cri.go:89] found id: ""
	I1205 06:49:27.057224  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.057230  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:27.057236  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:27.057291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:27.083393  485986 cri.go:89] found id: ""
	I1205 06:49:27.083408  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.083415  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:27.083420  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:27.083480  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:27.108369  485986 cri.go:89] found id: ""
	I1205 06:49:27.108383  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.108390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:27.108394  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:27.108454  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:27.136631  485986 cri.go:89] found id: ""
	I1205 06:49:27.136645  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.136653  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:27.136659  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:27.136726  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:27.163262  485986 cri.go:89] found id: ""
	I1205 06:49:27.163277  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.163286  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:27.163294  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:27.163353  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:27.188133  485986 cri.go:89] found id: ""
	I1205 06:49:27.188152  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.188160  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:27.188167  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:27.188177  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:27.252259  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:27.252270  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:27.252280  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:27.330222  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:27.330243  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:27.360158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:27.360174  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:27.433608  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:27.433628  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
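
The dmesg pass keeps only warn-and-above kernel messages, which is where an OOM kill or a cgroup failure would surface if the host itself were starving the control plane. A narrower hand-run variant of the command above (the grep pattern is mine, not the harness's):

    sudo dmesg --level=err,crit,alert,emerg | \
        grep -Ei 'oom|cgroup|kill' | tail -n 20
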
	I1205 06:49:29.949566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:29.960768  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:29.960834  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:29.986155  485986 cri.go:89] found id: ""
	I1205 06:49:29.986169  485986 logs.go:282] 0 containers: []
	W1205 06:49:29.986176  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:29.986181  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:29.986241  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:30.063119  485986 cri.go:89] found id: ""
	I1205 06:49:30.063137  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.063144  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:30.063163  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:30.063243  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:30.093759  485986 cri.go:89] found id: ""
	I1205 06:49:30.093774  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.093782  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:30.093788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:30.093860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:30.123430  485986 cri.go:89] found id: ""
	I1205 06:49:30.123452  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.123460  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:30.123465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:30.123554  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:30.151722  485986 cri.go:89] found id: ""
	I1205 06:49:30.151744  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.151752  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:30.151758  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:30.151820  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:30.186802  485986 cri.go:89] found id: ""
	I1205 06:49:30.186831  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.186852  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:30.186859  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:30.186929  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:30.213270  485986 cri.go:89] found id: ""
	I1205 06:49:30.213293  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.213301  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:30.213309  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:30.213320  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:30.279872  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:30.279893  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:30.296737  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:30.296759  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:30.374429  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:30.374439  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:30.374450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:30.450678  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:30.450701  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:32.984051  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:32.993990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:32.994049  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:33.020636  485986 cri.go:89] found id: ""
	I1205 06:49:33.020650  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.020657  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:33.020663  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:33.020719  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:33.049013  485986 cri.go:89] found id: ""
	I1205 06:49:33.049027  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.049034  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:33.049039  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:33.049098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:33.078567  485986 cri.go:89] found id: ""
	I1205 06:49:33.078581  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.078588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:33.078594  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:33.078652  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:33.103212  485986 cri.go:89] found id: ""
	I1205 06:49:33.103226  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.103233  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:33.103238  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:33.103293  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:33.127983  485986 cri.go:89] found id: ""
	I1205 06:49:33.127997  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.128004  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:33.128030  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:33.128085  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:33.153777  485986 cri.go:89] found id: ""
	I1205 06:49:33.153792  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.153799  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:33.153805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:33.153863  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:33.178536  485986 cri.go:89] found id: ""
	I1205 06:49:33.178550  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.178557  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:33.178565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:33.178576  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:33.244570  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:33.244594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:33.259835  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:33.259851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:33.338788  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:33.338799  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:33.338810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:33.425207  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:33.425236  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
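
With zero containers on the node, the kubelet journal is the most informative of the gathered logs: it should record why the static pods were never started. A quick hand-run extraction of its recent failures (the filter pattern is illustrative):

    sudo journalctl -u kubelet -n 400 --no-pager | \
        grep -Ei 'error|fail|refused' | tail -n 20
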
	I1205 06:49:35.956397  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:35.966480  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:35.966543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:35.995353  485986 cri.go:89] found id: ""
	I1205 06:49:35.995367  485986 logs.go:282] 0 containers: []
	W1205 06:49:35.995374  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:35.995378  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:35.995435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:36.024388  485986 cri.go:89] found id: ""
	I1205 06:49:36.024403  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.024410  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:36.024415  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:36.024477  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:36.051022  485986 cri.go:89] found id: ""
	I1205 06:49:36.051036  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.051054  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:36.051059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:36.051124  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:36.076096  485986 cri.go:89] found id: ""
	I1205 06:49:36.076110  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.076117  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:36.076123  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:36.076180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:36.105105  485986 cri.go:89] found id: ""
	I1205 06:49:36.105119  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.105127  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:36.105131  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:36.105187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:36.131094  485986 cri.go:89] found id: ""
	I1205 06:49:36.131107  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.131114  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:36.131120  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:36.131180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:36.160327  485986 cri.go:89] found id: ""
	I1205 06:49:36.160342  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.160349  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:36.160357  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:36.160367  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:36.175190  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:36.175205  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:36.236428  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:36.236479  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:36.236489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:36.320584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:36.320608  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:36.354951  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:36.354968  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:38.924529  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:38.934948  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:38.935008  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:38.961612  485986 cri.go:89] found id: ""
	I1205 06:49:38.961626  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.961633  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:38.961638  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:38.961699  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:38.987542  485986 cri.go:89] found id: ""
	I1205 06:49:38.987562  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.987569  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:38.987574  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:38.987637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:39.017388  485986 cri.go:89] found id: ""
	I1205 06:49:39.017402  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.017409  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:39.017414  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:39.017475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:39.043798  485986 cri.go:89] found id: ""
	I1205 06:49:39.043813  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.043821  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:39.043826  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:39.043883  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:39.072134  485986 cri.go:89] found id: ""
	I1205 06:49:39.072148  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.072155  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:39.072160  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:39.072214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:39.097127  485986 cri.go:89] found id: ""
	I1205 06:49:39.097141  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.097148  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:39.097154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:39.097215  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:39.125406  485986 cri.go:89] found id: ""
	I1205 06:49:39.125420  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.125427  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:39.125434  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:39.125447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:39.191762  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:39.191782  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:39.206972  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:39.206987  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:39.274830  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:39.274841  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:39.274851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:39.365052  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:39.365073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:41.896143  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:41.906833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:41.906906  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:41.934416  485986 cri.go:89] found id: ""
	I1205 06:49:41.934430  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.934437  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:41.934442  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:41.934498  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:41.962049  485986 cri.go:89] found id: ""
	I1205 06:49:41.962063  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.962079  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:41.962084  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:41.962150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:41.991028  485986 cri.go:89] found id: ""
	I1205 06:49:41.991042  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.991049  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:41.991053  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:41.991121  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:42.020514  485986 cri.go:89] found id: ""
	I1205 06:49:42.020536  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.020544  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:42.020550  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:42.020614  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:42.047453  485986 cri.go:89] found id: ""
	I1205 06:49:42.047467  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.047474  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:42.047479  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:42.047535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:42.079004  485986 cri.go:89] found id: ""
	I1205 06:49:42.079019  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.079026  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:42.079033  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:42.079098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:42.110781  485986 cri.go:89] found id: ""
	I1205 06:49:42.110806  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.110814  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:42.110821  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:42.110832  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:42.191665  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:42.191688  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:42.241592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:42.241609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:42.314021  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:42.314041  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:42.331123  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:42.331139  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:42.401371  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
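
Each retry cycle above runs the same cascade: pgrep for a kube-apiserver process, then `sudo crictl ps -a --quiet --name=<component>` for every control-plane component; `found id: ""` together with `0 containers` means CRI-O has no container, running or exited, for that name. A sketch of one such check, assuming crictl is installed and sudo needs no password:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers mirrors the per-component check in the log: ask crictl
	// for the IDs of all containers (any state) whose name matches. An empty
	// result is what the log prints as `found id: ""` / `0 containers`.
	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listCRIContainers(c)
			fmt.Printf("%s: %d containers, err=%v\n", c, len(ids), err)
		}
	}
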
	I1205 06:49:44.902557  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:44.913856  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:44.913928  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:44.944329  485986 cri.go:89] found id: ""
	I1205 06:49:44.944343  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.944350  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:44.944355  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:44.944411  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:44.972877  485986 cri.go:89] found id: ""
	I1205 06:49:44.972890  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.972897  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:44.972902  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:44.972961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:44.997771  485986 cri.go:89] found id: ""
	I1205 06:49:44.997785  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.997792  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:44.997797  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:44.997858  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:45.044196  485986 cri.go:89] found id: ""
	I1205 06:49:45.044212  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.044220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:45.044225  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:45.044296  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:45.100218  485986 cri.go:89] found id: ""
	I1205 06:49:45.100234  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.100242  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:45.100247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:45.100322  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:45.143680  485986 cri.go:89] found id: ""
	I1205 06:49:45.143696  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.143704  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:45.143710  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:45.144010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:45.184794  485986 cri.go:89] found id: ""
	I1205 06:49:45.184810  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.184818  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:45.184827  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:45.184840  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:45.266987  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:45.267020  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:45.286876  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:45.286913  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:45.370968  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:45.363581   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.364305   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.365832   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.366292   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.367509   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:45.363581   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.364305   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.365832   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.366292   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.367509   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:45.370979  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:45.370991  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:45.446768  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:45.446788  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
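
The timestamps on the pgrep lines advance by roughly three seconds per attempt (06:49:41.9, 06:49:44.9, 06:49:47.9, ...), so this is a fixed-interval poll that keeps re-checking until the apiserver appears or a deadline passes. An equivalent loop in plain Go, with the interval inferred from the log rather than taken from minikube source:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForAPIServer polls a readiness check at a fixed interval until it
	// succeeds or the deadline passes, mimicking the ~3s cadence of the
	// pgrep/crictl cycles above.
	func waitForAPIServer(check func() error, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for kube-apiserver")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := waitForAPIServer(func() error {
			return errors.New("no kube-apiserver container found") // stand-in for the crictl check
		}, 3*time.Second, 10*time.Second)
		fmt.Println(err)
	}
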
	I1205 06:49:47.979096  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:47.989170  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:47.989236  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:48.018828  485986 cri.go:89] found id: ""
	I1205 06:49:48.018841  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.018849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:48.018854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:48.018915  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:48.048874  485986 cri.go:89] found id: ""
	I1205 06:49:48.048888  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.048895  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:48.048901  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:48.048960  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:48.075707  485986 cri.go:89] found id: ""
	I1205 06:49:48.075722  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.075728  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:48.075733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:48.075792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:48.100630  485986 cri.go:89] found id: ""
	I1205 06:49:48.100644  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.100651  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:48.100657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:48.100715  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:48.126176  485986 cri.go:89] found id: ""
	I1205 06:49:48.126190  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.126197  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:48.126202  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:48.126266  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:48.153143  485986 cri.go:89] found id: ""
	I1205 06:49:48.153157  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.153170  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:48.153181  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:48.153249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:48.179066  485986 cri.go:89] found id: ""
	I1205 06:49:48.179080  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.179087  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:48.179094  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:48.179104  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:48.238867  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:48.231394   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.232041   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233115   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233702   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.235281   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:48.231394   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.232041   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233115   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233702   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.235281   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:48.238878  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:48.238892  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:48.318473  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:48.318493  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:48.351978  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:48.352000  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:48.421167  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:48.421187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
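
Between API checks, every cycle also shells out for diagnostics: the last 400 journal lines for the kubelet and crio units, kernel messages at warning level and above from dmesg, and a `kubectl describe nodes` that keeps failing while the apiserver is down. A sketch of the same gathering step, assuming systemd journals are available on the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs runs the same diagnostic commands shown in the log, capturing
	// combined output; a failure in one probe does not stop the others.
	func gatherLogs() {
		cmds := map[string][]string{
			"kubelet": {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
			"crio":    {"sudo", "journalctl", "-u", "crio", "-n", "400"},
			"dmesg":   {"/bin/bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		}
		for name, argv := range cmds {
			out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
			fmt.Printf("== %s: %d bytes, err=%v ==\n", name, len(out), err)
		}
	}

	func main() { gatherLogs() }
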
	I1205 06:49:50.939180  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:50.949233  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:50.949290  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:50.978828  485986 cri.go:89] found id: ""
	I1205 06:49:50.978842  485986 logs.go:282] 0 containers: []
	W1205 06:49:50.978849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:50.978854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:50.978910  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:51.004445  485986 cri.go:89] found id: ""
	I1205 06:49:51.004461  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.004469  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:51.004475  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:51.004545  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:51.032998  485986 cri.go:89] found id: ""
	I1205 06:49:51.033012  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.033019  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:51.033025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:51.033080  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:51.058907  485986 cri.go:89] found id: ""
	I1205 06:49:51.058921  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.058929  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:51.058934  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:51.058998  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:51.088751  485986 cri.go:89] found id: ""
	I1205 06:49:51.088765  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.088773  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:51.088778  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:51.088836  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:51.114739  485986 cri.go:89] found id: ""
	I1205 06:49:51.114753  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.114760  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:51.114766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:51.114827  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:51.146228  485986 cri.go:89] found id: ""
	I1205 06:49:51.146242  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.146249  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:51.146257  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:51.146267  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:51.213460  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:51.213479  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:51.228827  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:51.228842  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:51.295308  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:51.295318  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:51.295328  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:51.378866  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:51.378887  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
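
The "container status" step is deliberately runtime-agnostic: it resolves crictl via `which crictl || echo crictl` and, if that invocation fails, falls back to `docker ps -a`. The same fallback expressed in Go, as a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus tries crictl first and falls back to docker, matching
	// the shell fallback used for the "container status" probe in the log.
	func containerStatus() (string, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		fmt.Printf("err=%v\n%s", err, out)
	}
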
	I1205 06:49:53.908370  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:53.918562  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:53.918621  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:53.944262  485986 cri.go:89] found id: ""
	I1205 06:49:53.944277  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.944284  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:53.944289  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:53.944349  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:53.969495  485986 cri.go:89] found id: ""
	I1205 06:49:53.969509  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.969516  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:53.969522  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:53.969602  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:53.996074  485986 cri.go:89] found id: ""
	I1205 06:49:53.996088  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.996095  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:53.996100  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:53.996155  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:54.023768  485986 cri.go:89] found id: ""
	I1205 06:49:54.023783  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.023790  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:54.023796  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:54.023854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:54.048370  485986 cri.go:89] found id: ""
	I1205 06:49:54.048385  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.048392  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:54.048397  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:54.048458  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:54.073241  485986 cri.go:89] found id: ""
	I1205 06:49:54.073255  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.073263  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:54.073268  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:54.073329  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:54.098794  485986 cri.go:89] found id: ""
	I1205 06:49:54.098808  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.098816  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:54.098824  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:54.098833  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:54.165835  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:54.165854  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:54.181432  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:54.181447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:54.255506  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:54.255516  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:54.255529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:54.341643  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:54.341666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:56.871077  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:56.883786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:56.883848  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:56.913242  485986 cri.go:89] found id: ""
	I1205 06:49:56.913255  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.913262  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:56.913268  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:56.913325  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:56.940834  485986 cri.go:89] found id: ""
	I1205 06:49:56.940849  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.940856  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:56.940863  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:56.940923  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:56.969612  485986 cri.go:89] found id: ""
	I1205 06:49:56.969626  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.969633  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:56.969639  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:56.969698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:56.996324  485986 cri.go:89] found id: ""
	I1205 06:49:56.996338  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.996345  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:56.996351  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:56.996412  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:57.023385  485986 cri.go:89] found id: ""
	I1205 06:49:57.023399  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.023407  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:57.023412  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:57.023470  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:57.047721  485986 cri.go:89] found id: ""
	I1205 06:49:57.047734  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.047741  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:57.047747  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:57.047803  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:57.072770  485986 cri.go:89] found id: ""
	I1205 06:49:57.072783  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.072790  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:57.072798  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:57.072807  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:57.137878  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:57.137898  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:57.153088  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:57.153110  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:57.215030  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:57.215041  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:57.215057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:57.298537  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:57.298556  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:59.836134  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:59.846404  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:59.846463  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:59.871308  485986 cri.go:89] found id: ""
	I1205 06:49:59.871322  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.871329  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:59.871333  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:59.871389  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:59.897753  485986 cri.go:89] found id: ""
	I1205 06:49:59.897767  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.897774  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:59.897779  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:59.897840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:59.922634  485986 cri.go:89] found id: ""
	I1205 06:49:59.922649  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.922655  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:59.922661  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:59.922721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:59.946450  485986 cri.go:89] found id: ""
	I1205 06:49:59.946463  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.946473  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:59.946478  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:59.946535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:59.972723  485986 cri.go:89] found id: ""
	I1205 06:49:59.972738  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.972745  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:59.972750  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:59.972809  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:00.021990  485986 cri.go:89] found id: ""
	I1205 06:50:00.022006  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.022014  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:00.022020  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:00.022097  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:00.144138  485986 cri.go:89] found id: ""
	I1205 06:50:00.144154  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.144162  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:00.144171  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:00.144184  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:00.257253  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:00.257284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:00.303408  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:00.303429  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:00.439913  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:00.439925  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:00.439937  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:00.532383  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:00.532408  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:03.067932  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:03.078353  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:03.078441  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:03.108943  485986 cri.go:89] found id: ""
	I1205 06:50:03.108957  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.108964  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:03.108969  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:03.109032  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:03.139046  485986 cri.go:89] found id: ""
	I1205 06:50:03.139060  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.139077  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:03.139082  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:03.139150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:03.166455  485986 cri.go:89] found id: ""
	I1205 06:50:03.166470  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.166479  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:03.166485  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:03.166587  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:03.195955  485986 cri.go:89] found id: ""
	I1205 06:50:03.195969  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.195976  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:03.195981  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:03.196037  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:03.221513  485986 cri.go:89] found id: ""
	I1205 06:50:03.221527  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.221539  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:03.221545  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:03.221616  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:03.250570  485986 cri.go:89] found id: ""
	I1205 06:50:03.250583  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.250589  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:03.250595  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:03.250649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:03.278449  485986 cri.go:89] found id: ""
	I1205 06:50:03.278463  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.278470  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:03.278477  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:03.278488  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:03.355784  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:03.355803  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:03.375344  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:03.375365  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:03.438665  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:03.438679  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:03.438690  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:03.518012  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:03.518040  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:06.053429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:06.064448  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:06.064511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:06.091072  485986 cri.go:89] found id: ""
	I1205 06:50:06.091087  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.091094  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:06.091100  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:06.091166  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:06.119823  485986 cri.go:89] found id: ""
	I1205 06:50:06.119837  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.119844  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:06.119849  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:06.119905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:06.148798  485986 cri.go:89] found id: ""
	I1205 06:50:06.148812  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.148819  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:06.148824  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:06.148880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:06.179319  485986 cri.go:89] found id: ""
	I1205 06:50:06.179334  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.179341  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:06.179346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:06.179402  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:06.204637  485986 cri.go:89] found id: ""
	I1205 06:50:06.204652  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.204659  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:06.204665  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:06.204727  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:06.232891  485986 cri.go:89] found id: ""
	I1205 06:50:06.232906  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.232913  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:06.232919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:06.232977  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:06.260874  485986 cri.go:89] found id: ""
	I1205 06:50:06.260888  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.260895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:06.260904  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:06.260914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:06.331930  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:06.331950  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:06.349062  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:06.349078  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:06.413245  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:06.404839   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.405471   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407216   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407836   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.409486   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
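Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8441, this profile's apiserver port. Before reading the later, near-identical iterations, it can be quicker to probe the port directly from the node. A hedged example, assuming the profile under test is the active minikube profile (pass -p <profile> otherwise):

    # Probe the endpoint the log keeps dialing. "000" (curl failure)
    # corresponds to the "connect: connection refused" lines above;
    # a 200 or 401 would mean the apiserver is actually serving.
    minikube ssh -- curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8441/healthz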
	I1205 06:50:06.413254  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:06.413265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:06.491562  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:06.491584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:09.021435  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:09.031990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:09.032051  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:09.057732  485986 cri.go:89] found id: ""
	I1205 06:50:09.057746  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.057753  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:09.057758  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:09.057814  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:09.085296  485986 cri.go:89] found id: ""
	I1205 06:50:09.085309  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.085316  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:09.085321  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:09.085377  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:09.113133  485986 cri.go:89] found id: ""
	I1205 06:50:09.113147  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.113154  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:09.113159  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:09.113221  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:09.139103  485986 cri.go:89] found id: ""
	I1205 06:50:09.139117  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.139125  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:09.139130  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:09.139196  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:09.171980  485986 cri.go:89] found id: ""
	I1205 06:50:09.171995  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.172005  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:09.172011  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:09.172066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:09.197034  485986 cri.go:89] found id: ""
	I1205 06:50:09.197048  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.197055  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:09.197059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:09.197122  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:09.222626  485986 cri.go:89] found id: ""
	I1205 06:50:09.222641  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.222649  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:09.222656  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:09.222667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:09.288268  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:09.288287  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:09.304011  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:09.304027  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:09.378142  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:09.369828   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.370439   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372261   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372817   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.374506   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:09.378151  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:09.378162  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:09.455057  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:09.455077  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:11.984604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:11.994696  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:11.994758  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:12.021691  485986 cri.go:89] found id: ""
	I1205 06:50:12.021706  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.021713  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:12.021718  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:12.021777  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:12.049086  485986 cri.go:89] found id: ""
	I1205 06:50:12.049099  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.049106  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:12.049111  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:12.049170  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:12.077335  485986 cri.go:89] found id: ""
	I1205 06:50:12.077348  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.077355  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:12.077360  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:12.077419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:12.104976  485986 cri.go:89] found id: ""
	I1205 06:50:12.104990  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.104998  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:12.105003  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:12.105065  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:12.130275  485986 cri.go:89] found id: ""
	I1205 06:50:12.130289  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.130297  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:12.130303  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:12.130359  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:12.156777  485986 cri.go:89] found id: ""
	I1205 06:50:12.156791  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.156798  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:12.156804  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:12.156862  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:12.184468  485986 cri.go:89] found id: ""
	I1205 06:50:12.184482  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.184489  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:12.184496  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:12.184506  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:12.250190  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:12.250212  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:12.265279  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:12.265295  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:12.350637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:12.342053   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.342918   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.344705   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.345237   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.346914   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:12.350648  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:12.350659  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:12.429523  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:12.429548  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:14.958454  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:14.970034  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:14.970110  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:14.996731  485986 cri.go:89] found id: ""
	I1205 06:50:14.996754  485986 logs.go:282] 0 containers: []
	W1205 06:50:14.996761  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:14.996767  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:14.996833  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:15.032417  485986 cri.go:89] found id: ""
	I1205 06:50:15.032440  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.032448  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:15.032454  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:15.032524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:15.060989  485986 cri.go:89] found id: ""
	I1205 06:50:15.061008  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.061016  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:15.061022  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:15.061083  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:15.088194  485986 cri.go:89] found id: ""
	I1205 06:50:15.088208  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.088215  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:15.088221  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:15.088280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:15.115923  485986 cri.go:89] found id: ""
	I1205 06:50:15.115938  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.115945  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:15.115951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:15.116010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:15.146014  485986 cri.go:89] found id: ""
	I1205 06:50:15.146028  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.146035  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:15.146041  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:15.146150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:15.173160  485986 cri.go:89] found id: ""
	I1205 06:50:15.173175  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.173191  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:15.173199  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:15.173208  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:15.245690  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:15.237281   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.237912   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.239571   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.240233   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.241922   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:15.245700  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:15.245710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:15.325395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:15.325417  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:15.356222  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:15.356276  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:15.428176  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:15.428198  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
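Each polling iteration runs the same seven crictl queries, one per control-plane component. The `--quiet` flag prints only container IDs, which is why an empty result surfaces in the log as `found id: ""` followed by `0 containers`. A hedged loop reproducing the sweep by hand:

    # Print each component with any matching container IDs; an empty
    # tail after the colon is the "No container was found" case above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        printf '%s: ' "$c"
        sudo crictl ps -a --quiet --name="$c" | tr '\n' ' '
        echo
    done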
	I1205 06:50:17.943733  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:17.954302  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:17.954363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:17.979858  485986 cri.go:89] found id: ""
	I1205 06:50:17.979872  485986 logs.go:282] 0 containers: []
	W1205 06:50:17.979879  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:17.979884  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:17.979948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:18.013482  485986 cri.go:89] found id: ""
	I1205 06:50:18.013497  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.013504  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:18.013509  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:18.013593  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:18.040079  485986 cri.go:89] found id: ""
	I1205 06:50:18.040094  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.040102  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:18.040108  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:18.040172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:18.066285  485986 cri.go:89] found id: ""
	I1205 06:50:18.066300  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.066308  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:18.066312  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:18.066369  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:18.091446  485986 cri.go:89] found id: ""
	I1205 06:50:18.091461  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.091468  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:18.091473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:18.091532  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:18.121218  485986 cri.go:89] found id: ""
	I1205 06:50:18.121234  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.121241  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:18.121247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:18.121306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:18.147004  485986 cri.go:89] found id: ""
	I1205 06:50:18.147018  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.147032  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:18.147039  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:18.147050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:18.212973  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:18.205230   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.206055   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207680   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207996   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.209502   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:18.212983  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:18.212993  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:18.290491  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:18.290510  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:18.319970  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:18.319986  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:18.392419  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:18.392440  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:20.907875  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:20.918552  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:20.918615  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:20.948914  485986 cri.go:89] found id: ""
	I1205 06:50:20.948928  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.948935  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:20.948941  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:20.948999  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:20.974289  485986 cri.go:89] found id: ""
	I1205 06:50:20.974303  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.974310  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:20.974315  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:20.974371  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:20.999954  485986 cri.go:89] found id: ""
	I1205 06:50:20.999968  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.999976  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:20.999980  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:21.000038  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:21.029788  485986 cri.go:89] found id: ""
	I1205 06:50:21.029803  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.029810  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:21.029815  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:21.029875  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:21.055163  485986 cri.go:89] found id: ""
	I1205 06:50:21.055177  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.055183  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:21.055188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:21.055246  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:21.080955  485986 cri.go:89] found id: ""
	I1205 06:50:21.080969  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.080977  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:21.080982  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:21.081052  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:21.108615  485986 cri.go:89] found id: ""
	I1205 06:50:21.108629  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.108637  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:21.108644  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:21.108655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:21.173790  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:21.173811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:21.188952  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:21.188969  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:21.253459  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:21.245103   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.245717   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.247496   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.248173   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.249826   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:21.253469  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:21.253480  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:21.337063  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:21.337084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:23.866768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:23.877363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:23.877430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:23.903790  485986 cri.go:89] found id: ""
	I1205 06:50:23.903807  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.903814  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:23.903819  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:23.903880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:23.933319  485986 cri.go:89] found id: ""
	I1205 06:50:23.933333  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.933341  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:23.933346  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:23.933403  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:23.959901  485986 cri.go:89] found id: ""
	I1205 06:50:23.959914  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.959922  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:23.959927  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:23.959987  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:23.986070  485986 cri.go:89] found id: ""
	I1205 06:50:23.986083  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.986090  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:23.986096  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:23.986154  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:24.014309  485986 cri.go:89] found id: ""
	I1205 06:50:24.014324  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.014331  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:24.014336  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:24.014422  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:24.040569  485986 cri.go:89] found id: ""
	I1205 06:50:24.040590  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.040598  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:24.040603  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:24.040663  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:24.066648  485986 cri.go:89] found id: ""
	I1205 06:50:24.066661  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.066669  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:24.066676  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:24.066687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:24.145239  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:24.145259  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:24.173133  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:24.173149  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:24.238469  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:24.238489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:24.253802  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:24.253821  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:24.341051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:24.329593   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.330313   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332016   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332556   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.337208   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
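With every component query coming back empty, the kubelet journal gathered in each iteration is the most informative artifact here. A hedged filter over the same 400-line window the log collects (the grep pattern is an assumption; tune it as needed):

    # Keep only the error-looking lines from the window minikube gathers.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|failed|refused'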
	I1205 06:50:26.841329  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:26.852711  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:26.852792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:26.878845  485986 cri.go:89] found id: ""
	I1205 06:50:26.878858  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.878865  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:26.878871  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:26.878926  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:26.903460  485986 cri.go:89] found id: ""
	I1205 06:50:26.903475  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.903482  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:26.903487  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:26.903543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:26.928316  485986 cri.go:89] found id: ""
	I1205 06:50:26.928330  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.928337  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:26.928342  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:26.928401  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:26.957464  485986 cri.go:89] found id: ""
	I1205 06:50:26.957477  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.957484  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:26.957490  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:26.957547  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:26.985494  485986 cri.go:89] found id: ""
	I1205 06:50:26.985508  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.985515  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:26.985520  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:26.985588  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:27.012077  485986 cri.go:89] found id: ""
	I1205 06:50:27.012092  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.012099  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:27.012105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:27.012164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:27.037759  485986 cri.go:89] found id: ""
	I1205 06:50:27.037772  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.037779  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:27.037802  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:27.037813  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:27.068005  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:27.068022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:27.132023  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:27.132042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:27.147964  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:27.147981  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:27.210077  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:27.201653   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.202464   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204190   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204761   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.206360   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:27.210087  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:27.210098  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:29.784398  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:29.794460  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:29.794523  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:29.820207  485986 cri.go:89] found id: ""
	I1205 06:50:29.820221  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.820228  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:29.820235  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:29.820301  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:29.845407  485986 cri.go:89] found id: ""
	I1205 06:50:29.845421  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.845429  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:29.845434  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:29.845494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:29.871350  485986 cri.go:89] found id: ""
	I1205 06:50:29.871364  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.871371  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:29.871376  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:29.871434  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:29.896668  485986 cri.go:89] found id: ""
	I1205 06:50:29.896682  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.896689  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:29.896694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:29.896753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:29.925230  485986 cri.go:89] found id: ""
	I1205 06:50:29.925243  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.925250  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:29.925256  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:29.925320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:29.950431  485986 cri.go:89] found id: ""
	I1205 06:50:29.950445  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.950453  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:29.950459  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:29.950516  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:29.975493  485986 cri.go:89] found id: ""
	I1205 06:50:29.975507  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.975514  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:29.975522  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:29.975532  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:29.990544  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:29.990561  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:30.089331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:30.079547   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.080925   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.082899   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.083556   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.085423   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:30.089343  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:30.089355  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:30.176998  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:30.177019  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:30.207325  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:30.207342  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:32.779616  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:32.789524  485986 kubeadm.go:602] duration metric: took 4m3.78523296s to restartPrimaryControlPlane
	W1205 06:50:32.789596  485986 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:50:32.789791  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:50:33.200382  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:33.213168  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:50:33.221236  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:50:33.221295  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:50:33.229165  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:50:33.229174  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:50:33.229226  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:50:33.236961  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:50:33.237026  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:50:33.244309  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:50:33.252201  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:50:33.252257  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:50:33.259677  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.267359  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:50:33.267427  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.275464  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:50:33.283208  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:50:33.283271  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:50:33.290746  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:50:33.405156  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:50:33.405615  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:50:33.478173  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:54:34.582933  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:54:34.582957  485986 kubeadm.go:319] 
	I1205 06:54:34.583076  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:54:34.588185  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:34.588247  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:34.588363  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:34.588446  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:34.588482  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:34.588527  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:34.588597  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:34.588649  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:34.588697  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:34.588744  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:34.588792  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:34.588836  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:34.588883  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:34.588934  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:34.589006  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:34.589099  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:34.589189  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:34.589249  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:34.592315  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:34.592403  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:34.592463  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:34.592535  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:34.592603  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:34.592668  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:34.592743  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:34.592810  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:34.592871  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:34.592953  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:34.593046  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:34.593088  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:34.593139  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:34.593190  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:34.593242  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:34.593294  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:34.593352  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:34.593406  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:34.593499  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:34.593561  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:34.596524  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:34.596625  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:34.596698  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:34.596789  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:34.596910  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:34.597004  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:34.597119  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:34.597212  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:34.597250  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:34.597382  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:34.597485  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:54:34.597547  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00128632s
	I1205 06:54:34.597550  485986 kubeadm.go:319] 
	I1205 06:54:34.597605  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:54:34.597636  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:54:34.597743  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:54:34.597746  485986 kubeadm.go:319] 
	I1205 06:54:34.597848  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:54:34.597879  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:54:34.597909  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 06:54:34.598022  485986 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128632s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 06:54:34.598117  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:54:34.598416  485986 kubeadm.go:319] 
	I1205 06:54:35.010606  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:54:35.026641  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:54:35.026696  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:54:35.034906  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:54:35.034914  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:54:35.034968  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:54:35.043100  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:54:35.043156  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:54:35.050682  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:54:35.058435  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:54:35.058491  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:54:35.066352  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.075006  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:54:35.075083  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.083161  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:54:35.091527  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:54:35.091591  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:54:35.099509  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:54:35.143144  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:35.143194  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:35.214737  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:35.214806  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:35.214841  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:35.214894  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:35.214941  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:35.214988  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:35.215036  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:35.215082  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:35.215135  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:35.215179  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:35.215227  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:35.215272  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:35.280867  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:35.280975  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:35.281065  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:35.290789  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:35.294356  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:35.294469  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:35.294532  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:35.294608  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:35.294667  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:35.294735  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:35.294788  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:35.294850  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:35.294910  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:35.294989  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:35.295060  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:35.295097  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:35.295152  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:35.600230  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:35.819372  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:36.031672  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:36.347784  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:36.515743  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:36.516403  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:36.519035  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:36.522469  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:36.522648  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:36.522737  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:36.522811  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:36.538750  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:36.538854  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:36.547809  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:36.548944  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:36.549484  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:36.685042  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:36.685156  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:58:36.684952  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233025s
	I1205 06:58:36.684982  485986 kubeadm.go:319] 
	I1205 06:58:36.685040  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:58:36.685073  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:58:36.685203  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:58:36.685213  485986 kubeadm.go:319] 
	I1205 06:58:36.685319  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:58:36.685352  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:58:36.685382  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:58:36.685386  485986 kubeadm.go:319] 
	I1205 06:58:36.690024  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:58:36.690504  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:58:36.690648  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:58:36.690898  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:58:36.690904  485986 kubeadm.go:319] 
	I1205 06:58:36.690971  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:58:36.691025  485986 kubeadm.go:403] duration metric: took 12m7.722207493s to StartCluster
	I1205 06:58:36.691058  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:58:36.691120  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:58:36.717503  485986 cri.go:89] found id: ""
	I1205 06:58:36.717522  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.717530  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:58:36.717535  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:58:36.717599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:58:36.742068  485986 cri.go:89] found id: ""
	I1205 06:58:36.742083  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.742090  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:58:36.742095  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:58:36.742150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:58:36.766426  485986 cri.go:89] found id: ""
	I1205 06:58:36.766439  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.766446  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:58:36.766452  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:58:36.766507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:58:36.791681  485986 cri.go:89] found id: ""
	I1205 06:58:36.791696  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.791703  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:58:36.791707  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:58:36.791767  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:58:36.816243  485986 cri.go:89] found id: ""
	I1205 06:58:36.816257  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.816264  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:58:36.816269  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:58:36.816323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:58:36.841386  485986 cri.go:89] found id: ""
	I1205 06:58:36.841399  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.841406  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:58:36.841411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:58:36.841467  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:58:36.866554  485986 cri.go:89] found id: ""
	I1205 06:58:36.866568  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.866575  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:58:36.866584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:58:36.866594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:58:36.900565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:58:36.900582  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:58:36.968215  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:58:36.968234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:58:36.983291  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:58:36.983307  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:58:37.054622  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:58:37.054632  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:58:37.054644  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1205 06:58:37.134983  485986 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:58:37.135045  485986 out.go:285] * 
	W1205 06:58:37.135160  485986 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:58:37.135223  485986 out.go:285] * 
	W1205 06:58:37.137432  485986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:58:37.142766  485986 out.go:203] 
	W1205 06:58:37.146311  485986 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 06:58:37.146363  485986 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:58:37.146497  485986 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:58:37.150301  485986 out.go:203] 
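
	The Suggestion line above comes straight from minikube; a minimal remediation sketch, assuming the kubelet failed its health checks because of the cgroup v1 deprecation flagged in the SystemVerification warnings (the failCgroupV1 field name is taken from that warning text, and /var/lib/kubelet/config.yaml is the path the kubelet-start phase wrote above):
	
	# Inspect the kubelet unit first, as the log itself advises.
	journalctl -xeu kubelet
	
	# Retry with an explicit systemd cgroup driver (flag verbatim from the
	# Suggestion line printed by minikube above).
	minikube start --extra-config=kubelet.cgroup-driver=systemd
	
	# Assumption: on kubelet v1.35+ a cgroup v1 host must opt in explicitly,
	# per the [WARNING SystemVerification] text; this appends the field that
	# warning names to the config file the kubelet-start phase wrote.
	echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml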
	
	
	==> CRI-O <==
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571372953Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571407981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571472524Z" level=info msg="Create NRI interface"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571571528Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571581186Z" level=info msg="runtime interface created"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571594224Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571600657Z" level=info msg="runtime interface starting up..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571606548Z" level=info msg="starting plugins..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571619709Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571689602Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:46:27 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.481366601Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1e822775-5cef-40d3-9686-eee6d086f1b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482224852Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e1ef1844-3877-40e0-84c2-d1c873b40d24 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482740149Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=56b0a0d4-9f66-4348-9e04-1e53dd2684db name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483228025Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=beb5cc41-ecba-44e2-8431-8eb7caf9e6f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483764967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d6fbbe20-116f-42f6-8365-a643bfd6a022 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484325426Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cc465c27-997c-4720-add0-d2aaefef1742 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484777542Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5847487f-12af-4b83-83de-0b1cf4bc7dd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.284218578Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9fcc6ad9-fc72-42e2-9eb3-af609b8c0fda name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285002572Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0a9c3300-2647-489a-a8c7-299acd2c2ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285494328Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ef012813-7294-42de-84e3-c56b0aecceed name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285987553Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=97a76923-ddd0-413b-afdb-1a86b6e1781b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.286464253Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=72332317-e652-4a97-9d17-3ba7818fe38f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.28695984Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=443ec697-27e1-4420-9454-8afdb0ee65b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.287383469Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7f6d73c6-60bf-4743-9f1c-60ae6c282918 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:00:48.284969   23903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:48.285636   23903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:48.287165   23903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:48.287475   23903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:48.288958   23903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
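	# Sketch (assumes curl is available on the node): the connection-refused
	# errors above can be reproduced without kubectl by probing the apiserver
	# port directly, e.g.
	#   curl -k https://localhost:8441/healthz
	# which fails with "curl: (7) ... Connection refused" while the apiserver is down.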
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:00:48 up  3:42,  0 user,  load average: 0.56, 0.30, 0.36
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:00:45 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:46 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 05 07:00:46 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:46 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:46 functional-787602 kubelet[23751]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:46 functional-787602 kubelet[23751]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:46 functional-787602 kubelet[23751]: E1205 07:00:46.323823   23751 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:46 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:46 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:46 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 05 07:00:46 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:46 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:47 functional-787602 kubelet[23797]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:47 functional-787602 kubelet[23797]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:47 functional-787602 kubelet[23797]: E1205 07:00:47.043228   23797 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:47 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:47 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:47 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 05 07:00:47 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:47 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:47 functional-787602 kubelet[23820]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:47 functional-787602 kubelet[23820]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:47 functional-787602 kubelet[23820]: E1205 07:00:47.834635   23820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:47 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:47 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
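The kubelet crash loop captured above always fails validation with "kubelet is configured to not run on a host using cgroup v1", so a useful first check is which cgroup hierarchy the node is actually on. A minimal sketch, assuming GNU coreutils inside the node:

	# cgroup2fs means the unified hierarchy (cgroup v2); tmpfs means legacy
	# cgroup v1, which this kubelet build refuses to start on.
	out/minikube-linux-arm64 -p functional-787602 ssh -- stat -fc %T /sys/fs/cgroup/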
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (331.855188ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.14s)
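The status probe above reads a single field of minikube's status output through a Go template ({{.APIServer}}); the same mechanism can report several components at once. A minimal sketch, assuming the same binary and profile:

	# Prints host, kubelet, and apiserver state on one line; exit status 2 is
	# expected while components are stopped, as in the run above.
	out/minikube-linux-arm64 status -p functional-787602 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'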

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-787602 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-787602 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (51.390696ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-787602 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-787602 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-787602 describe po hello-node-connect: exit status 1 (62.8149ms)

** stderr ** 
	E1205 07:00:33.384536  499899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.385991  499899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.387414  499899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.388805  499899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.390254  499899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-787602 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-787602 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-787602 logs -l app=hello-node-connect: exit status 1 (61.758493ms)

** stderr ** 
	E1205 07:00:33.447784  499903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.449300  499903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.450712  499903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.452120  499903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-787602 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-787602 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-787602 describe svc hello-node-connect: exit status 1 (70.591968ms)

** stderr ** 
	E1205 07:00:33.512090  499908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.513673  499908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.515042  499908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.516427  499908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.517800  499908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-787602 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
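The inspect dump above shows the apiserver port 8441/tcp published on 127.0.0.1:33151; the same Go-template technique the provisioner uses for port 22 later in these logs can extract it directly instead of scanning the JSON. A minimal sketch against this container:

	# Index into NetworkSettings.Ports exactly as cli_runner does for 22/tcp.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-787602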
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (317.246633ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-787602 cache delete minikube-local-cache-test:functional-787602                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl images                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ cache   │ functional-787602 cache reload                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ ssh     │ functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │ 05 Dec 25 06:46 UTC │
	│ kubectl │ functional-787602 kubectl -- --context functional-787602 get pods                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ start   │ -p functional-787602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:46 UTC │                     │
	│ ssh     │ functional-787602 ssh echo hello                                                                         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ config  │ functional-787602 config unset cpus                                                                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ config  │ functional-787602 config get cpus                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ config  │ functional-787602 config set cpus 2                                                                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ config  │ functional-787602 config get cpus                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ config  │ functional-787602 config unset cpus                                                                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ ssh     │ functional-787602 ssh cat /etc/hostname                                                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │ 05 Dec 25 06:58 UTC │
	│ config  │ functional-787602 config get cpus                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel  │ functional-787602 tunnel --alsologtostderr                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel  │ functional-787602 tunnel --alsologtostderr                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ tunnel  │ functional-787602 tunnel --alsologtostderr                                                               │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 06:58 UTC │                     │
	│ addons  │ functional-787602 addons list                                                                            │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ addons  │ functional-787602 addons list -o json                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:46:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:46:23.060483  485986 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:46:23.060587  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060592  485986 out.go:374] Setting ErrFile to fd 2...
	I1205 06:46:23.060596  485986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:46:23.060943  485986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:46:23.061383  485986 out.go:368] Setting JSON to false
	I1205 06:46:23.062251  485986 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12510,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:46:23.062334  485986 start.go:143] virtualization:  
	I1205 06:46:23.066082  485986 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:46:23.069981  485986 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:46:23.070104  485986 notify.go:221] Checking for updates...
	I1205 06:46:23.076003  485986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:46:23.078837  485986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:46:23.081722  485986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:46:23.084680  485986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:46:23.087568  485986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:46:23.090922  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:23.091022  485986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:46:23.121487  485986 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:46:23.121590  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.189036  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.180099644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.189132  485986 docker.go:319] overlay module found
	I1205 06:46:23.192176  485986 out.go:179] * Using the docker driver based on existing profile
	I1205 06:46:23.195026  485986 start.go:309] selected driver: docker
	I1205 06:46:23.195034  485986 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.195143  485986 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:46:23.195245  485986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:46:23.259735  485986 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-05 06:46:23.25087077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:46:23.260168  485986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 06:46:23.260193  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:23.260245  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:23.260292  485986 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:23.263405  485986 out.go:179] * Starting "functional-787602" primary control-plane node in "functional-787602" cluster
	I1205 06:46:23.266278  485986 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:46:23.269305  485986 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:46:23.272128  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:23.272198  485986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:46:23.291679  485986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:46:23.291691  485986 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 06:46:23.331907  485986 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 06:46:24.681828  485986 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 06:46:24.681963  485986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/config.json ...
	I1205 06:46:24.682057  485986 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682183  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 06:46:24.682191  485986 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.111µs
	I1205 06:46:24.682203  485986 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 06:46:24.682212  485986 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682238  485986 cache.go:243] Successfully downloaded all kic artifacts
	I1205 06:46:24.682242  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 06:46:24.682246  485986 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 35.143µs
	I1205 06:46:24.682251  485986 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682266  485986 start.go:360] acquireMachinesLock for functional-787602: {Name:mk2cef91e069ce153bded9238a833f1f3c564d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682260  485986 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682305  485986 start.go:364] duration metric: took 27.331µs to acquireMachinesLock for "functional-787602"
	I1205 06:46:24.682303  485986 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682317  485986 start.go:96] Skipping create...Using existing machine configuration
	I1205 06:46:24.682322  485986 fix.go:54] fixHost starting: 
	I1205 06:46:24.682343  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 06:46:24.682348  485986 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.295µs
	I1205 06:46:24.682354  485986 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 06:46:24.682364  485986 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682416  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 06:46:24.682421  485986 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 57.691µs
	I1205 06:46:24.682428  485986 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682437  485986 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682462  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 06:46:24.682466  485986 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.31µs
	I1205 06:46:24.682471  485986 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682480  485986 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682514  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 06:46:24.682518  485986 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.27µs
	I1205 06:46:24.682523  485986 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 06:46:24.682531  485986 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 06:46:24.682555  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 06:46:24.682558  485986 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.283µs
	I1205 06:46:24.682568  485986 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 06:46:24.682583  485986 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 06:46:24.682587  485986 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 328.529µs
	I1205 06:46:24.682591  485986 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 06:46:24.682599  485986 cache.go:87] Successfully saved all images to host disk.
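
The cache pass above repeats a lock/check/save pattern per image: acquire a named lock, stat the cached tarball, and skip the save when it already exists (hence the microsecond durations). A minimal Go sketch of that pattern, with hypothetical names (cacheImage, save) rather than minikube's actual API:

	package cache

	import (
		"fmt"
		"os"
		"sync"
	)

	var locks sync.Map // image ref -> *sync.Mutex, one lock per image

	// cacheImage saves img to tarPath unless the tarball is already present.
	func cacheImage(img, tarPath string, save func(img, tarPath string) error) error {
		mu, _ := locks.LoadOrStore(img, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		if _, err := os.Stat(tarPath); err == nil {
			fmt.Printf("cache image %q -> %q: already exists\n", img, tarPath)
			return nil // fast path: nothing to download or write
		}
		return save(img, tarPath) // slow path: pull the image and write the tar
	}
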
	I1205 06:46:24.682614  485986 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 06:46:24.699421  485986 fix.go:112] recreateIfNeeded on functional-787602: state=Running err=<nil>
	W1205 06:46:24.699440  485986 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 06:46:24.704636  485986 out.go:252] * Updating the running docker "functional-787602" container ...
	I1205 06:46:24.704669  485986 machine.go:94] provisionDockerMachine start ...
	I1205 06:46:24.704752  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.722297  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.722651  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.722658  485986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 06:46:24.869775  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:24.869801  485986 ubuntu.go:182] provisioning hostname "functional-787602"
	I1205 06:46:24.869864  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:24.887234  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:24.887558  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:24.887567  485986 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-787602 && echo "functional-787602" | sudo tee /etc/hostname
	I1205 06:46:25.047727  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-787602
	
	I1205 06:46:25.047810  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.066336  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.066675  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.066689  485986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 06:46:25.218719  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 06:46:25.218735  485986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 06:46:25.218754  485986 ubuntu.go:190] setting up certificates
	I1205 06:46:25.218762  485986 provision.go:84] configureAuth start
	I1205 06:46:25.218833  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:25.236317  485986 provision.go:143] copyHostCerts
	I1205 06:46:25.236383  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 06:46:25.236396  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 06:46:25.236468  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 06:46:25.236562  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 06:46:25.236565  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 06:46:25.236589  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 06:46:25.236636  485986 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 06:46:25.236640  485986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 06:46:25.236661  485986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 06:46:25.236704  485986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.functional-787602 san=[127.0.0.1 192.168.49.2 functional-787602 localhost minikube]
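
The "generating server cert" line names its inputs: the shared minikube CA (ca.pem/ca-key.pem) signs a server certificate whose SANs are the IPs and DNS names listed in san=[...]. A minimal sketch using Go's crypto/x509, assuming RSA keys and an already-parsed CA; newServerCert is an illustrative name, not minikube's code:

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server cert signed by ca/caKey with the SANs
	// from the log line above; returns DER bytes plus the new private key.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-787602"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"functional-787602", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
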
	I1205 06:46:25.509369  485986 provision.go:177] copyRemoteCerts
	I1205 06:46:25.509433  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 06:46:25.509483  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.526532  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:25.630074  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 06:46:25.647569  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 06:46:25.665563  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 06:46:25.683160  485986 provision.go:87] duration metric: took 464.374115ms to configureAuth
	I1205 06:46:25.683179  485986 ubuntu.go:206] setting minikube options for container-runtime
	I1205 06:46:25.683380  485986 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 06:46:25.683487  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:25.701466  485986 main.go:143] libmachine: Using SSH client type: native
	I1205 06:46:25.701775  485986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1205 06:46:25.701787  485986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 06:46:26.045147  485986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 06:46:26.045161  485986 machine.go:97] duration metric: took 1.340485738s to provisionDockerMachine
	I1205 06:46:26.045171  485986 start.go:293] postStartSetup for "functional-787602" (driver="docker")
	I1205 06:46:26.045182  485986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 06:46:26.045240  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 06:46:26.045301  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.071462  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.178226  485986 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 06:46:26.181599  485986 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 06:46:26.181617  485986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 06:46:26.181627  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 06:46:26.181684  485986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 06:46:26.181759  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 06:46:26.181833  485986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts -> hosts in /etc/test/nested/copy/444147
	I1205 06:46:26.181875  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/444147
	I1205 06:46:26.189500  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:26.206597  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts --> /etc/test/nested/copy/444147/hosts (40 bytes)
	I1205 06:46:26.223486  485986 start.go:296] duration metric: took 178.3022ms for postStartSetup
	I1205 06:46:26.223577  485986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 06:46:26.223614  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.239842  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.339498  485986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 06:46:26.344313  485986 fix.go:56] duration metric: took 1.66198384s for fixHost
	I1205 06:46:26.344329  485986 start.go:83] releasing machines lock for "functional-787602", held for 1.662017843s
	I1205 06:46:26.344396  485986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787602
	I1205 06:46:26.361695  485986 ssh_runner.go:195] Run: cat /version.json
	I1205 06:46:26.361744  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.361773  485986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 06:46:26.361823  485986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 06:46:26.380556  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.389997  485986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 06:46:26.566296  485986 ssh_runner.go:195] Run: systemctl --version
	I1205 06:46:26.572676  485986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 06:46:26.609041  485986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 06:46:26.613450  485986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 06:46:26.613514  485986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 06:46:26.621451  485986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 06:46:26.621466  485986 start.go:496] detecting cgroup driver to use...
	I1205 06:46:26.621496  485986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 06:46:26.621543  485986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 06:46:26.637300  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 06:46:26.650753  485986 docker.go:218] disabling cri-docker service (if available) ...
	I1205 06:46:26.650821  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 06:46:26.666902  485986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 06:46:26.680209  485986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 06:46:26.795240  485986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 06:46:26.925661  485986 docker.go:234] disabling docker service ...
	I1205 06:46:26.925721  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 06:46:26.941529  485986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 06:46:26.954708  485986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 06:46:27.063545  485986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 06:46:27.175808  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 06:46:27.188517  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 06:46:27.203590  485986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 06:46:27.203644  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.212003  485986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 06:46:27.212066  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.220691  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.229907  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.238922  485986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 06:46:27.247339  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.256340  485986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.264720  485986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 06:46:27.273692  485986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 06:46:27.281324  485986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
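
The sed invocations above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) before the daemon reload below. The same in-place edit expressed as a small Go helper, equivalent in effect to the sed commands; setPauseImageAndCgroup is an illustrative name:

	package crioconf

	import (
		"os"
		"regexp"
	)

	// setPauseImageAndCgroup rewrites the pause_image and cgroup_manager
	// lines of a CRI-O drop-in, mirroring the two sed edits in the log.
	func setPauseImageAndCgroup(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		return os.WriteFile(path, data, 0o644)
	}
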
	I1205 06:46:27.288509  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.394627  485986 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 06:46:27.581943  485986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 06:46:27.582023  485986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 06:46:27.586836  485986 start.go:564] Will wait 60s for crictl version
	I1205 06:46:27.586892  485986 ssh_runner.go:195] Run: which crictl
	I1205 06:46:27.591027  485986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 06:46:27.618052  485986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
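
Both waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are bounded polls. A generic sketch of the pattern, assuming a 500ms probe interval like the lock Delay seen earlier in the log; waitForPath is an illustrative helper:

	package waitfor

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the deadline passes.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // e.g. /var/run/crio/crio.sock appeared
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}
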
	I1205 06:46:27.618154  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.654922  485986 ssh_runner.go:195] Run: crio --version
	I1205 06:46:27.689535  485986 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 06:46:27.692450  485986 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 06:46:27.709456  485986 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 06:46:27.716890  485986 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1205 06:46:27.719774  485986 kubeadm.go:884] updating cluster {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 06:46:27.719904  485986 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 06:46:27.719957  485986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 06:46:27.756745  485986 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 06:46:27.756757  485986 cache_images.go:86] Images are preloaded, skipping loading
	I1205 06:46:27.756762  485986 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1205 06:46:27.756860  485986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 06:46:27.756933  485986 ssh_runner.go:195] Run: crio config
	I1205 06:46:27.826615  485986 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1205 06:46:27.826635  485986 cni.go:84] Creating CNI manager for ""
	I1205 06:46:27.826644  485986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:46:27.826657  485986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 06:46:27.826679  485986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787602 NodeName:functional-787602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 06:46:27.826795  485986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
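
minikube renders this multi-document kubeadm.yaml from the cluster parameters (advertise address 192.168.49.2, bind port 8441, pod subnet 10.244.0.0/16, and so on). A trimmed sketch of the templating step using Go's text/template; the template text and names here are illustrative, not minikube's actual template:

	package kubeadm

	import (
		"os"
		"text/template"
	)

	// A fragment of an InitConfiguration template; the real manifest
	// carries the full set of documents shown in the log above.
	var initTmpl = template.Must(template.New("init").Parse(
		"apiVersion: kubeadm.k8s.io/v1beta4\n" +
			"kind: InitConfiguration\n" +
			"localAPIEndpoint:\n" +
			"  advertiseAddress: {{.AdvertiseAddress}}\n" +
			"  bindPort: {{.BindPort}}\n"))

	type initParams struct {
		AdvertiseAddress string
		BindPort         int
	}

	// renderInit writes the rendered fragment to stdout.
	func renderInit() error {
		return initTmpl.Execute(os.Stdout, initParams{
			AdvertiseAddress: "192.168.49.2",
			BindPort:         8441,
		})
	}
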
	
	I1205 06:46:27.826871  485986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 06:46:27.834649  485986 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 06:46:27.834712  485986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 06:46:27.842099  485986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 06:46:27.855421  485986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 06:46:27.868701  485986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1205 06:46:27.882058  485986 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 06:46:27.885936  485986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 06:46:27.995572  485986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 06:46:28.275034  485986 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602 for IP: 192.168.49.2
	I1205 06:46:28.275045  485986 certs.go:195] generating shared ca certs ...
	I1205 06:46:28.275061  485986 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:46:28.275249  485986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 06:46:28.275292  485986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 06:46:28.275298  485986 certs.go:257] generating profile certs ...
	I1205 06:46:28.275410  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.key
	I1205 06:46:28.275475  485986 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key.16d29bb2
	I1205 06:46:28.275515  485986 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key
	I1205 06:46:28.275644  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 06:46:28.275677  485986 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 06:46:28.275685  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 06:46:28.275720  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 06:46:28.275747  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 06:46:28.275784  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 06:46:28.275832  485986 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 06:46:28.276503  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 06:46:28.298544  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 06:46:28.319289  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 06:46:28.339576  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 06:46:28.358300  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 06:46:28.376540  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 06:46:28.394872  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 06:46:28.412281  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 06:46:28.429993  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 06:46:28.447492  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 06:46:28.464800  485986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 06:46:28.482269  485986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 06:46:28.494984  485986 ssh_runner.go:195] Run: openssl version
	I1205 06:46:28.501339  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.508762  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 06:46:28.516382  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520092  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.520163  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 06:46:28.563665  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 06:46:28.571080  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.578338  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 06:46:28.585799  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589656  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.589716  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 06:46:28.631223  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 06:46:28.638732  485986 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.646106  485986 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 06:46:28.653539  485986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657103  485986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.657161  485986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 06:46:28.698123  485986 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 06:46:28.706515  485986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 06:46:28.710605  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 06:46:28.754183  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 06:46:28.798105  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 06:46:28.841637  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 06:46:28.883652  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 06:46:28.926486  485986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 06:46:28.968827  485986 kubeadm.go:401] StartCluster: {Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:46:28.968900  485986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 06:46:28.968973  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:28.995506  485986 cri.go:89] found id: ""
	I1205 06:46:28.995567  485986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 06:46:29.004262  485986 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 06:46:29.004281  485986 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 06:46:29.004345  485986 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 06:46:29.012409  485986 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.012971  485986 kubeconfig.go:125] found "functional-787602" server: "https://192.168.49.2:8441"
	I1205 06:46:29.014556  485986 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 06:46:29.022548  485986 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 06:31:50.409182079 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 06:46:27.876278809 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
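
The drift check above relies on diff's exit status: 0 means the files are identical, 1 means they differ, and anything else is an error from diff itself. A small Go sketch of that check; configDrifted is an illustrative name:

	package drift

	import (
		"os/exec"
	)

	// configDrifted reports whether the deployed kubeadm.yaml differs from
	// the freshly generated kubeadm.yaml.new, returning the unified diff.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // exit 0: files identical, no drift
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // exit 1: drift; out holds the diff
		}
		return false, "", err // exit >1: diff itself failed
	}
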
	I1205 06:46:29.022570  485986 kubeadm.go:1161] stopping kube-system containers ...
	I1205 06:46:29.022584  485986 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 06:46:29.022652  485986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 06:46:29.056958  485986 cri.go:89] found id: ""
	I1205 06:46:29.057019  485986 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 06:46:29.073934  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:46:29.081656  485986 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5623 Dec  5 06:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  5 06:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  5 06:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  5 06:35 /etc/kubernetes/scheduler.conf
	
	I1205 06:46:29.081722  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:46:29.089572  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:46:29.097486  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.097543  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:46:29.105088  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.112583  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.112639  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:46:29.120188  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:46:29.127909  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 06:46:29.127966  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:46:29.135508  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:46:29.143544  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:29.190973  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.485506  485986 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.294504309s)
	I1205 06:46:30.485577  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.689694  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.752398  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 06:46:30.798299  485986 api_server.go:52] waiting for apiserver process to appear ...
	I1205 06:46:30.798367  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.299303  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:31.799420  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:32.299360  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:32.798577  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:33.298564  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:33.799310  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.298783  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:34.799510  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:35.299369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:35.799119  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:36.298663  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:36.798517  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.299207  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:37.799156  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:38.298684  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:38.798475  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:39.299188  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:39.799197  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.299101  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:40.798572  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:41.298530  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:41.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:42.298546  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:42.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.298563  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:43.799313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:44.298528  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:44.799429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:45.299246  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:45.799313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.298849  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:46.799336  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:47.298524  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:47.798566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:48.298926  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:48.798523  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.298502  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:49.799392  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:50.298514  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:50.799156  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:51.299002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:51.798510  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.298587  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:52.798531  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:53.298834  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:53.798937  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:54.298568  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:54.798738  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.298745  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:55.799302  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:56.298517  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:56.799058  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:57.299228  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:57.798518  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.298540  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:58.799439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:59.298489  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:46:59.798827  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:00.298721  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:00.799210  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.298539  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:01.798525  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:02.298844  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:02.799320  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:03.298437  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:03.799300  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.299120  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:04.799319  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:05.298499  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:05.799357  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:06.298718  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:06.799264  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.299497  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:07.799177  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:08.298596  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:08.798469  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:09.298441  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:09.798552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.299123  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:10.798514  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:11.299549  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:11.799361  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:12.298530  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:12.798490  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.299082  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:13.798506  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.298576  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:14.799316  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:15.298516  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:15.798581  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.298604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:16.799331  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:17.298518  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:17.799198  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:18.298513  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:18.799043  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.298601  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:19.798562  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:20.298562  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:20.798978  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:21.298537  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:21.798570  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.298807  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:22.799307  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:23.298910  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:23.798961  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:24.299359  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:24.799509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:25.299086  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:25.798511  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.298495  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:26.799378  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:27.298528  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:27.799258  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:28.298589  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:28.799234  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.299117  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:29.798575  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:30.299185  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
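
The minute of near-identical pgrep lines above is the apiserver wait loop: one probe roughly every 500ms until the process appears or the 60s budget runs out (it runs out here, so log collection begins below). The loop, reduced to a sketch; waitForAPIServerProcess is an illustrative name:

	package apiwait

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until kube-apiserver shows up
	// or the timeout elapses.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
	}
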
	I1205 06:47:30.799188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:30.799265  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:30.824550  485986 cri.go:89] found id: ""
	I1205 06:47:30.824564  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.824571  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:30.824577  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:30.824640  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:30.851389  485986 cri.go:89] found id: ""
	I1205 06:47:30.851404  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.851412  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:30.851416  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:30.851473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:30.877392  485986 cri.go:89] found id: ""
	I1205 06:47:30.877406  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.877421  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:30.877425  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:30.877481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:30.902294  485986 cri.go:89] found id: ""
	I1205 06:47:30.902308  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.902315  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:30.902321  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:30.902431  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:30.938796  485986 cri.go:89] found id: ""
	I1205 06:47:30.938810  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.938818  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:30.938823  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:30.938888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:30.965100  485986 cri.go:89] found id: ""
	I1205 06:47:30.965114  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.965121  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:30.965127  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:30.965183  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:30.992646  485986 cri.go:89] found id: ""
	I1205 06:47:30.992661  485986 logs.go:282] 0 containers: []
	W1205 06:47:30.992668  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:30.992676  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:30.992686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:31.063641  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:31.063661  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:31.081045  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:31.081060  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:31.156684  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:31.147335   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.148203   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150028   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.150887   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:31.152774   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:31.156698  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:31.156710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:31.237470  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:31.237495  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:33.770808  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:33.780812  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:33.780872  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:33.805688  485986 cri.go:89] found id: ""
	I1205 06:47:33.805701  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.805714  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:33.805719  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:33.805779  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:33.832478  485986 cri.go:89] found id: ""
	I1205 06:47:33.832492  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.832499  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:33.832504  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:33.832560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:33.857669  485986 cri.go:89] found id: ""
	I1205 06:47:33.857683  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.857690  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:33.857695  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:33.857750  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:33.883403  485986 cri.go:89] found id: ""
	I1205 06:47:33.883417  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.883426  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:33.883431  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:33.883490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:33.914197  485986 cri.go:89] found id: ""
	I1205 06:47:33.914212  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.914219  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:33.914224  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:33.914295  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:33.944924  485986 cri.go:89] found id: ""
	I1205 06:47:33.944938  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.944945  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:33.944950  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:33.945007  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:33.973129  485986 cri.go:89] found id: ""
	I1205 06:47:33.973143  485986 logs.go:282] 0 containers: []
	W1205 06:47:33.973151  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:33.973158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:33.973169  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:34.044761  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:34.044781  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:34.061807  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:34.061823  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:34.130826  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:34.123392   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.123937   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.125704   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.126261   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:34.127281   11761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:34.130840  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:34.130851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:34.209603  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:34.209627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:36.743254  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:36.753733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:36.753810  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:36.779655  485986 cri.go:89] found id: ""
	I1205 06:47:36.779669  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.779676  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:36.779681  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:36.779738  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:36.805062  485986 cri.go:89] found id: ""
	I1205 06:47:36.805076  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.805083  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:36.805089  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:36.805152  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:36.830864  485986 cri.go:89] found id: ""
	I1205 06:47:36.830878  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.830886  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:36.830891  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:36.830961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:36.855729  485986 cri.go:89] found id: ""
	I1205 06:47:36.855749  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.855757  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:36.855762  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:36.855819  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:36.881068  485986 cri.go:89] found id: ""
	I1205 06:47:36.881082  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.881089  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:36.881094  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:36.881157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:36.909354  485986 cri.go:89] found id: ""
	I1205 06:47:36.909367  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.909374  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:36.909380  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:36.909450  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:36.939352  485986 cri.go:89] found id: ""
	I1205 06:47:36.939375  485986 logs.go:282] 0 containers: []
	W1205 06:47:36.939388  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:36.939396  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:36.939407  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:36.954937  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:36.954953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:37.027384  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:37.014899   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.015674   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.017332   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.018005   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:37.020132   11860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:37.027396  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:37.027407  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:37.108980  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:37.109004  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:37.137603  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:37.137620  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:39.704971  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:39.715073  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:39.715153  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:39.742797  485986 cri.go:89] found id: ""
	I1205 06:47:39.742811  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.742818  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:39.742823  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:39.742882  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:39.767795  485986 cri.go:89] found id: ""
	I1205 06:47:39.767809  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.767816  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:39.767821  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:39.767888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:39.793002  485986 cri.go:89] found id: ""
	I1205 06:47:39.793016  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.793023  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:39.793028  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:39.793108  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:39.819015  485986 cri.go:89] found id: ""
	I1205 06:47:39.819029  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.819036  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:39.819042  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:39.819098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:39.844387  485986 cri.go:89] found id: ""
	I1205 06:47:39.844401  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.844408  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:39.844413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:39.844487  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:39.871624  485986 cri.go:89] found id: ""
	I1205 06:47:39.871638  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.871644  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:39.871650  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:39.871721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:39.897731  485986 cri.go:89] found id: ""
	I1205 06:47:39.897746  485986 logs.go:282] 0 containers: []
	W1205 06:47:39.897754  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:39.897761  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:39.897771  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:39.962937  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:39.955722   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.956200   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957514   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.957911   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:39.959459   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:39.962949  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:39.962960  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:40.058236  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:40.058256  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:40.094003  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:40.094022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:40.167448  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:40.167468  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:42.685167  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:42.695150  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:42.695206  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:42.719880  485986 cri.go:89] found id: ""
	I1205 06:47:42.719893  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.719901  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:42.719906  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:42.719965  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:42.748922  485986 cri.go:89] found id: ""
	I1205 06:47:42.748936  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.748943  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:42.748949  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:42.749005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:42.778525  485986 cri.go:89] found id: ""
	I1205 06:47:42.778539  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.778546  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:42.778551  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:42.778610  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:42.804447  485986 cri.go:89] found id: ""
	I1205 06:47:42.804461  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.804468  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:42.804473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:42.804530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:42.829834  485986 cri.go:89] found id: ""
	I1205 06:47:42.829848  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.829855  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:42.829861  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:42.829917  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:42.861917  485986 cri.go:89] found id: ""
	I1205 06:47:42.861937  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.861945  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:42.861951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:42.862011  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:42.889024  485986 cri.go:89] found id: ""
	I1205 06:47:42.889047  485986 logs.go:282] 0 containers: []
	W1205 06:47:42.889055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:42.889063  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:42.889073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:42.954442  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:42.954462  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:42.969793  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:42.969810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:43.044093  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:43.035341   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.036249   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.037845   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.039225   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:43.040004   12081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:43.044113  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:43.044124  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:43.137811  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:43.137841  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:45.667791  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:45.677638  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:45.677697  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:45.702199  485986 cri.go:89] found id: ""
	I1205 06:47:45.702213  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.702220  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:45.702226  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:45.702284  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:45.726622  485986 cri.go:89] found id: ""
	I1205 06:47:45.726635  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.726642  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:45.726647  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:45.726703  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:45.752464  485986 cri.go:89] found id: ""
	I1205 06:47:45.752477  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.752484  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:45.752489  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:45.752551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:45.777756  485986 cri.go:89] found id: ""
	I1205 06:47:45.777770  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.777777  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:45.777783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:45.777838  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:45.803428  485986 cri.go:89] found id: ""
	I1205 06:47:45.803443  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.803459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:45.803464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:45.803524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:45.829175  485986 cri.go:89] found id: ""
	I1205 06:47:45.829189  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.829196  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:45.829201  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:45.829260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:45.855195  485986 cri.go:89] found id: ""
	I1205 06:47:45.855210  485986 logs.go:282] 0 containers: []
	W1205 06:47:45.855217  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:45.855224  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:45.855235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:45.887261  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:45.887277  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:45.952635  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:45.952655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:45.968248  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:45.968265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:46.039946  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:46.029945   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.031374   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.032091   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.033908   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:46.034613   12198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:46.039964  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:46.039975  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:48.631039  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:48.641171  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:48.641231  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:48.666360  485986 cri.go:89] found id: ""
	I1205 06:47:48.666402  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.666409  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:48.666417  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:48.666473  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:48.694222  485986 cri.go:89] found id: ""
	I1205 06:47:48.694237  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.694243  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:48.694249  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:48.694304  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:48.718984  485986 cri.go:89] found id: ""
	I1205 06:47:48.718998  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.719005  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:48.719010  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:48.719067  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:48.744169  485986 cri.go:89] found id: ""
	I1205 06:47:48.744183  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.744190  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:48.744195  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:48.744253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:48.769240  485986 cri.go:89] found id: ""
	I1205 06:47:48.769263  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.769270  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:48.769275  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:48.769341  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:48.798956  485986 cri.go:89] found id: ""
	I1205 06:47:48.798971  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.798978  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:48.798983  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:48.799044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:48.826195  485986 cri.go:89] found id: ""
	I1205 06:47:48.826209  485986 logs.go:282] 0 containers: []
	W1205 06:47:48.826216  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:48.826223  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:48.826233  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:48.892751  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:48.892771  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:48.908154  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:48.908171  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:48.975550  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:48.967655   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.968321   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.969895   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.970429   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:48.972143   12289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:48.975561  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:48.975572  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:49.057631  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:49.057651  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.594813  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:51.606364  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:51.606436  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:51.636377  485986 cri.go:89] found id: ""
	I1205 06:47:51.636391  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.636398  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:51.636403  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:51.636464  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:51.662318  485986 cri.go:89] found id: ""
	I1205 06:47:51.662332  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.662338  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:51.662349  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:51.662430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:51.688886  485986 cri.go:89] found id: ""
	I1205 06:47:51.688900  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.688907  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:51.688911  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:51.688969  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:51.717982  485986 cri.go:89] found id: ""
	I1205 06:47:51.717996  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.718003  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:51.718008  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:51.718066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:51.744748  485986 cri.go:89] found id: ""
	I1205 06:47:51.744762  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.744769  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:51.744783  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:51.744840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:51.769889  485986 cri.go:89] found id: ""
	I1205 06:47:51.769903  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.769909  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:51.769915  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:51.769970  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:51.797012  485986 cri.go:89] found id: ""
	I1205 06:47:51.797026  485986 logs.go:282] 0 containers: []
	W1205 06:47:51.797033  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:51.797040  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:51.797050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:51.871624  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:51.871643  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:47:51.901592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:51.901609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:51.968311  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:51.968333  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:51.983733  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:51.983748  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:52.057625  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:52.048335   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.049167   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.050935   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.051486   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:52.053878   12409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:54.557903  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:54.568103  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:54.568164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:54.597086  485986 cri.go:89] found id: ""
	I1205 06:47:54.597100  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.597107  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:54.597112  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:54.597168  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:54.622728  485986 cri.go:89] found id: ""
	I1205 06:47:54.622743  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.622750  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:54.622756  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:54.622812  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:54.646642  485986 cri.go:89] found id: ""
	I1205 06:47:54.646656  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.646663  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:54.646668  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:54.646723  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:54.671271  485986 cri.go:89] found id: ""
	I1205 06:47:54.671286  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.671293  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:54.671299  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:54.671355  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:54.696124  485986 cri.go:89] found id: ""
	I1205 06:47:54.696138  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.696150  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:54.696155  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:54.696210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:54.720362  485986 cri.go:89] found id: ""
	I1205 06:47:54.720375  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.720383  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:54.720388  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:54.720442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:54.754080  485986 cri.go:89] found id: ""
	I1205 06:47:54.754094  485986 logs.go:282] 0 containers: []
	W1205 06:47:54.754101  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:54.754108  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:54.754121  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:54.820260  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:54.820281  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:54.836201  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:54.836217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:54.909051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:54.900823   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.901529   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903370   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.903888   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:54.905505   12498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
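
The describe-nodes step fails for the same underlying reason the container probes come back empty: kubectl, run inside the node against the cluster's kubeconfig, dials the apiserver at localhost:8441, and nothing is listening because the kube-apiserver container was never created. A quick manual confirmation from the host, assuming this is the active minikube profile and that ss is present in the node image:

    # expect no LISTEN entry for the apiserver port while these errors persist
    minikube ssh "sudo ss -ltn | grep 8441 || echo 'nothing listening on 8441'"
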
	I1205 06:47:54.909069  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:54.909080  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:54.984892  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:54.984912  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
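
The container-status source uses a small shell fallback: resolve crictl if it is on the PATH (otherwise keep the bare name so the failure message stays legible), and if that invocation fails for any reason, try docker instead. Spelled out, the backtick one-liner above is equivalent to:

    # prefer crictl; fall back to docker if the crictl call fails for any
    # reason (binary missing, CRI socket down, ...)
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a
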
	I1205 06:47:57.516912  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:47:57.527633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:47:57.527698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:47:57.553823  485986 cri.go:89] found id: ""
	I1205 06:47:57.553837  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.553844  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:47:57.553851  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:47:57.553924  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:47:57.581054  485986 cri.go:89] found id: ""
	I1205 06:47:57.581068  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.581075  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:47:57.581080  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:47:57.581139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:47:57.606438  485986 cri.go:89] found id: ""
	I1205 06:47:57.606452  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.606460  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:47:57.606465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:47:57.606522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:47:57.632199  485986 cri.go:89] found id: ""
	I1205 06:47:57.632214  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.632220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:47:57.632226  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:47:57.632285  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:47:57.661439  485986 cri.go:89] found id: ""
	I1205 06:47:57.661454  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.661460  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:47:57.661465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:47:57.661521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:47:57.690916  485986 cri.go:89] found id: ""
	I1205 06:47:57.690930  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.690937  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:47:57.690943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:47:57.691003  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:47:57.716612  485986 cri.go:89] found id: ""
	I1205 06:47:57.716625  485986 logs.go:282] 0 containers: []
	W1205 06:47:57.716632  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:47:57.716640  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:47:57.716650  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:47:57.787213  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:47:57.787235  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:47:57.802362  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:47:57.802400  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:47:57.864350  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:47:57.856663   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.857331   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.858792   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.859379   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:47:57.860927   12606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:47:57.864360  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:47:57.864370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:47:57.941328  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:47:57.941349  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:00.470137  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:00.483635  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:00.483706  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:00.512315  485986 cri.go:89] found id: ""
	I1205 06:48:00.512330  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.512338  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:00.512345  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:00.512409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:00.546442  485986 cri.go:89] found id: ""
	I1205 06:48:00.546457  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.546464  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:00.546469  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:00.546530  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:00.573096  485986 cri.go:89] found id: ""
	I1205 06:48:00.573110  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.573123  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:00.573128  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:00.573187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:00.603254  485986 cri.go:89] found id: ""
	I1205 06:48:00.603268  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.603275  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:00.603280  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:00.603337  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:00.633558  485986 cri.go:89] found id: ""
	I1205 06:48:00.633572  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.633579  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:00.633586  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:00.633651  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:00.660790  485986 cri.go:89] found id: ""
	I1205 06:48:00.660804  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.660810  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:00.660816  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:00.660874  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:00.688773  485986 cri.go:89] found id: ""
	I1205 06:48:00.688786  485986 logs.go:282] 0 containers: []
	W1205 06:48:00.688793  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:00.688800  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:00.688811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:00.753427  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:00.753450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:00.768529  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:00.768545  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:00.832028  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:00.823845   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.824604   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826256   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.826855   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:00.828522   12709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:00.832038  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:00.832048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:00.909664  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:00.909686  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:03.440768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:03.451151  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:03.451210  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:03.475560  485986 cri.go:89] found id: ""
	I1205 06:48:03.475574  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.475580  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:03.475586  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:03.475657  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:03.500266  485986 cri.go:89] found id: ""
	I1205 06:48:03.500280  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.500286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:03.500291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:03.500350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:03.528908  485986 cri.go:89] found id: ""
	I1205 06:48:03.528921  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.528928  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:03.528933  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:03.528993  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:03.556882  485986 cri.go:89] found id: ""
	I1205 06:48:03.556896  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.556903  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:03.556908  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:03.556963  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:03.582231  485986 cri.go:89] found id: ""
	I1205 06:48:03.582244  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.582252  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:03.582257  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:03.582315  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:03.611645  485986 cri.go:89] found id: ""
	I1205 06:48:03.611658  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.611665  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:03.611670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:03.611732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:03.637034  485986 cri.go:89] found id: ""
	I1205 06:48:03.637048  485986 logs.go:282] 0 containers: []
	W1205 06:48:03.637055  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:03.637062  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:03.637072  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:03.703283  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:03.703305  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:03.718166  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:03.718182  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:03.784612  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:03.776937   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.777755   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779272   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.779806   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:03.781286   12814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:03.784623  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:03.784645  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:03.865840  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:03.865871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
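
Four cycles in, the cadence is clear: a full probe-and-gather pass runs roughly every three seconds (06:47:54, 06:47:57, 06:48:00, 06:48:03, ...). The retry loop itself lives in minikube's Go code and is not shown here, but its observable behaviour matches a fixed-interval wait; a shell equivalent, with the interval inferred from the timestamps rather than taken from the source:

    # keep polling until the apiserver process finally appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
    done
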
	I1205 06:48:06.395611  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:06.406190  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:06.406253  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:06.431958  485986 cri.go:89] found id: ""
	I1205 06:48:06.431972  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.431979  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:06.431984  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:06.432047  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:06.457302  485986 cri.go:89] found id: ""
	I1205 06:48:06.457317  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.457324  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:06.457329  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:06.457391  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:06.482778  485986 cri.go:89] found id: ""
	I1205 06:48:06.482793  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.482799  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:06.482805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:06.482860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:06.508293  485986 cri.go:89] found id: ""
	I1205 06:48:06.508307  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.508314  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:06.508319  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:06.508457  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:06.537089  485986 cri.go:89] found id: ""
	I1205 06:48:06.537103  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.537110  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:06.537115  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:06.537175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:06.564731  485986 cri.go:89] found id: ""
	I1205 06:48:06.564745  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.564752  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:06.564759  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:06.564815  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:06.590872  485986 cri.go:89] found id: ""
	I1205 06:48:06.590887  485986 logs.go:282] 0 containers: []
	W1205 06:48:06.590895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:06.590903  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:06.590914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:06.658481  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:06.650418   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.651217   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.652805   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.653354   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:06.654995   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:06.658495  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:06.658505  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:06.733300  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:06.733322  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:06.768591  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:06.768606  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:06.834509  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:06.834529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
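
Note that the order of the log sources varies from cycle to cycle: this pass gathered describe nodes, CRI-O and container status before kubelet and dmesg, while earlier passes started with kubelet. The results are unaffected. The reordering would be consistent with the sources being iterated out of a Go map, whose range order is randomized, but that is an inference about the implementation, not something this log states.
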
	I1205 06:48:09.350677  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:09.360723  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:09.360783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:09.388219  485986 cri.go:89] found id: ""
	I1205 06:48:09.388232  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.388239  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:09.388244  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:09.388306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:09.416992  485986 cri.go:89] found id: ""
	I1205 06:48:09.417007  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.417013  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:09.417019  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:09.417076  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:09.446304  485986 cri.go:89] found id: ""
	I1205 06:48:09.446318  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.446325  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:09.446330  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:09.446409  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:09.472368  485986 cri.go:89] found id: ""
	I1205 06:48:09.472383  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.472390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:09.472395  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:09.472474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:09.497702  485986 cri.go:89] found id: ""
	I1205 06:48:09.497716  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.497722  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:09.497727  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:09.497783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:09.525679  485986 cri.go:89] found id: ""
	I1205 06:48:09.525693  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.525700  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:09.525706  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:09.525765  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:09.552628  485986 cri.go:89] found id: ""
	I1205 06:48:09.552643  485986 logs.go:282] 0 containers: []
	W1205 06:48:09.552650  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:09.552657  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:09.552667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:09.618085  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:09.618105  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:09.633067  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:09.633084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:09.696615  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:09.688707   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.689518   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691086   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.691392   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:09.692864   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:09.696626  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:09.696637  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:09.772055  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:09.772074  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:12.303940  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:12.314229  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:12.314298  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:12.348459  485986 cri.go:89] found id: ""
	I1205 06:48:12.348473  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.348480  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:12.348485  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:12.348543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:12.373284  485986 cri.go:89] found id: ""
	I1205 06:48:12.373299  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.373306  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:12.373311  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:12.373375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:12.398539  485986 cri.go:89] found id: ""
	I1205 06:48:12.398559  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.398566  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:12.398571  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:12.398635  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:12.423138  485986 cri.go:89] found id: ""
	I1205 06:48:12.423151  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.423158  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:12.423163  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:12.423223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:12.447667  485986 cri.go:89] found id: ""
	I1205 06:48:12.447680  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.447688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:12.447692  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:12.447751  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:12.472343  485986 cri.go:89] found id: ""
	I1205 06:48:12.472357  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.472364  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:12.472369  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:12.472425  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:12.497076  485986 cri.go:89] found id: ""
	I1205 06:48:12.497089  485986 logs.go:282] 0 containers: []
	W1205 06:48:12.497096  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:12.497102  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:12.497112  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:12.574451  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:12.574470  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:12.610910  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:12.610926  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:12.678117  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:12.678135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:12.692476  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:12.692492  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:12.758359  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:12.750295   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.750936   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.752531   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.753043   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:12.754596   13139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:15.258636  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:15.270043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:15.270103  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:15.304750  485986 cri.go:89] found id: ""
	I1205 06:48:15.304764  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.304771  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:15.304776  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:15.304832  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:15.344151  485986 cri.go:89] found id: ""
	I1205 06:48:15.344165  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.344172  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:15.344182  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:15.344249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:15.371527  485986 cri.go:89] found id: ""
	I1205 06:48:15.371541  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.371548  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:15.371553  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:15.371618  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:15.403495  485986 cri.go:89] found id: ""
	I1205 06:48:15.403508  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.403515  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:15.403521  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:15.403581  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:15.429409  485986 cri.go:89] found id: ""
	I1205 06:48:15.429424  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.429431  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:15.429436  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:15.429501  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:15.459234  485986 cri.go:89] found id: ""
	I1205 06:48:15.459248  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.459257  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:15.459263  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:15.459320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:15.488886  485986 cri.go:89] found id: ""
	I1205 06:48:15.488900  485986 logs.go:282] 0 containers: []
	W1205 06:48:15.488907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:15.488915  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:15.488925  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:15.556219  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:15.556239  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:15.571562  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:15.571579  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:15.635494  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:15.628155   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.628632   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630326   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.630665   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:15.632132   13230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:15.635504  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:15.635514  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:15.717719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:15.717740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.253466  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:18.263430  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:18.263491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:18.305028  485986 cri.go:89] found id: ""
	I1205 06:48:18.305042  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.305049  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:18.305054  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:18.305111  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:18.332689  485986 cri.go:89] found id: ""
	I1205 06:48:18.332702  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.332709  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:18.332715  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:18.332770  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:18.360205  485986 cri.go:89] found id: ""
	I1205 06:48:18.360220  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.360227  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:18.360232  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:18.360291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:18.385479  485986 cri.go:89] found id: ""
	I1205 06:48:18.385493  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.385500  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:18.385505  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:18.385560  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:18.413258  485986 cri.go:89] found id: ""
	I1205 06:48:18.413272  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.413279  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:18.413286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:18.413348  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:18.439018  485986 cri.go:89] found id: ""
	I1205 06:48:18.439032  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.439039  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:18.439044  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:18.439099  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:18.465311  485986 cri.go:89] found id: ""
	I1205 06:48:18.465324  485986 logs.go:282] 0 containers: []
	W1205 06:48:18.465341  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:18.465348  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:18.465359  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:18.479885  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:18.479902  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:18.543997  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:18.536169   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.536669   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538416   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.538850   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:18.540401   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:18.544007  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:18.544018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:18.620924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:18.620948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:18.655034  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:18.655050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
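	The block above is one iteration of minikube's readiness probe: it looks for a kube-apiserver process, then asks CRI-O for each expected control-plane container by name, and every query comes back empty. A minimal sketch of the same probe, run on the node over SSH (the commands are the ones visible in the log; the component list is the set minikube checks here):

	# Look for a running apiserver process, then list each expected
	# control-plane container by name; empty output from crictl matches
	# the 'found id: ""' lines in the log above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done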
	I1205 06:48:21.222770  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:21.233411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:21.233478  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:21.263287  485986 cri.go:89] found id: ""
	I1205 06:48:21.263302  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.263309  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:21.263315  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:21.263379  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:21.300915  485986 cri.go:89] found id: ""
	I1205 06:48:21.300929  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.300936  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:21.300941  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:21.301005  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:21.328975  485986 cri.go:89] found id: ""
	I1205 06:48:21.328989  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.328999  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:21.329004  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:21.329061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:21.358828  485986 cri.go:89] found id: ""
	I1205 06:48:21.358842  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.358849  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:21.358854  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:21.358914  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:21.384401  485986 cri.go:89] found id: ""
	I1205 06:48:21.384422  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.384429  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:21.384434  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:21.384491  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:21.409705  485986 cri.go:89] found id: ""
	I1205 06:48:21.409719  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.409726  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:21.409732  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:21.409791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:21.437633  485986 cri.go:89] found id: ""
	I1205 06:48:21.437650  485986 logs.go:282] 0 containers: []
	W1205 06:48:21.437658  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:21.437665  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:21.437675  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:21.515785  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:21.515808  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:21.549019  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:21.549035  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:21.620027  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:21.620048  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:21.635622  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:21.635638  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:21.710252  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:21.702462   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.703235   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.704737   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.705215   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.706750   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:21.702462   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.703235   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.704737   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.705215   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:21.706750   13454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
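	Each "describe nodes" attempt then fails identically: kubectl dials localhost:8441 and gets connection refused, which is consistent with the empty container listings, since no apiserver container ever came up to bind the port. One quick way to confirm that from inside the node (a sketch, assuming curl is available in the image; the port comes from the kubeconfig used above):

	# While no kube-apiserver is running this prints the fallback text;
	# a healthy apiserver answers /healthz (or /livez on newer releases) with "ok".
	curl -sk https://localhost:8441/healthz || echo 'apiserver not reachable on :8441'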
	I1205 06:48:24.210507  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:24.221002  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:24.221061  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:24.246259  485986 cri.go:89] found id: ""
	I1205 06:48:24.246273  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.246280  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:24.246285  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:24.246350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:24.274723  485986 cri.go:89] found id: ""
	I1205 06:48:24.274736  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.274743  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:24.274749  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:24.274807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:24.312165  485986 cri.go:89] found id: ""
	I1205 06:48:24.312179  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.312186  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:24.312191  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:24.312248  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:24.351913  485986 cri.go:89] found id: ""
	I1205 06:48:24.351927  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.351934  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:24.351939  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:24.351995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:24.377944  485986 cri.go:89] found id: ""
	I1205 06:48:24.377958  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.377966  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:24.377971  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:24.378029  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:24.403127  485986 cri.go:89] found id: ""
	I1205 06:48:24.403142  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.403149  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:24.403154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:24.403211  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:24.428745  485986 cri.go:89] found id: ""
	I1205 06:48:24.428760  485986 logs.go:282] 0 containers: []
	W1205 06:48:24.428777  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:24.428785  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:24.428795  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:24.495838  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:24.495860  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:24.511294  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:24.511309  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:24.577637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:24.569622   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.570426   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.571915   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.572368   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.573899   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:24.569622   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.570426   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.571915   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.572368   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:24.573899   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:24.577647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:24.577658  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:24.664395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:24.664422  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:27.196552  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:27.206670  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:27.206729  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:27.232859  485986 cri.go:89] found id: ""
	I1205 06:48:27.232873  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.232880  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:27.232885  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:27.232944  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:27.261077  485986 cri.go:89] found id: ""
	I1205 06:48:27.261091  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.261098  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:27.261104  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:27.261157  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:27.299035  485986 cri.go:89] found id: ""
	I1205 06:48:27.299049  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.299056  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:27.299061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:27.299117  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:27.325080  485986 cri.go:89] found id: ""
	I1205 06:48:27.325094  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.325100  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:27.325105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:27.325165  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:27.355194  485986 cri.go:89] found id: ""
	I1205 06:48:27.355208  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.355215  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:27.355220  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:27.355281  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:27.380260  485986 cri.go:89] found id: ""
	I1205 06:48:27.380274  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.380281  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:27.380286  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:27.380340  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:27.404746  485986 cri.go:89] found id: ""
	I1205 06:48:27.404760  485986 logs.go:282] 0 containers: []
	W1205 06:48:27.404767  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:27.404774  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:27.404784  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:27.471214  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:27.471234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:27.486196  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:27.486213  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:27.549013  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:27.540412   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.541998   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.542711   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544155   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544607   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:27.540412   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.541998   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.542711   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544155   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:27.544607   13650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:27.549023  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:27.549034  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:27.626719  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:27.626740  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.157779  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:30.168828  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:30.168888  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:30.196472  485986 cri.go:89] found id: ""
	I1205 06:48:30.196487  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.196494  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:30.196500  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:30.196561  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:30.222435  485986 cri.go:89] found id: ""
	I1205 06:48:30.222449  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.222456  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:30.222463  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:30.222521  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:30.252893  485986 cri.go:89] found id: ""
	I1205 06:48:30.252907  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.252914  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:30.252919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:30.252979  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:30.293703  485986 cri.go:89] found id: ""
	I1205 06:48:30.293717  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.293724  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:30.293729  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:30.293791  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:30.323711  485986 cri.go:89] found id: ""
	I1205 06:48:30.323724  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.323731  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:30.323746  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:30.323804  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:30.355817  485986 cri.go:89] found id: ""
	I1205 06:48:30.355831  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.355838  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:30.355844  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:30.355905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:30.384820  485986 cri.go:89] found id: ""
	I1205 06:48:30.384834  485986 logs.go:282] 0 containers: []
	W1205 06:48:30.384850  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:30.384858  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:30.384869  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:30.400554  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:30.400571  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:30.462509  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:30.454797   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.455349   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.456851   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.457304   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.458799   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:30.454797   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.455349   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.456851   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.457304   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:30.458799   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:30.462519  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:30.462529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:30.539861  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:30.539884  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:30.572611  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:30.572627  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:33.142900  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:33.153456  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:33.153522  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:33.178913  485986 cri.go:89] found id: ""
	I1205 06:48:33.178926  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.178933  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:33.178939  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:33.178994  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:33.204173  485986 cri.go:89] found id: ""
	I1205 06:48:33.204187  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.204195  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:33.204200  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:33.204260  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:33.231661  485986 cri.go:89] found id: ""
	I1205 06:48:33.231675  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.231688  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:33.231693  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:33.231749  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:33.256100  485986 cri.go:89] found id: ""
	I1205 06:48:33.256113  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.256120  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:33.256125  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:33.256180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:33.288692  485986 cri.go:89] found id: ""
	I1205 06:48:33.288706  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.288713  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:33.288718  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:33.288778  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:33.322902  485986 cri.go:89] found id: ""
	I1205 06:48:33.322916  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.322931  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:33.322936  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:33.322995  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:33.354832  485986 cri.go:89] found id: ""
	I1205 06:48:33.354846  485986 logs.go:282] 0 containers: []
	W1205 06:48:33.354853  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:33.354861  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:33.354871  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:33.419523  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:33.419542  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:33.436533  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:33.436549  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:33.500717  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:33.492589   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.493351   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.494906   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.495229   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.497011   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:33.492589   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.493351   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.494906   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.495229   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:33.497011   13859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:33.500727  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:33.500744  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:33.576166  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:33.576187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.103891  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:36.114026  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:36.114086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:36.138404  485986 cri.go:89] found id: ""
	I1205 06:48:36.138419  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.138426  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:36.138432  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:36.138490  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:36.165135  485986 cri.go:89] found id: ""
	I1205 06:48:36.165149  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.165156  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:36.165161  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:36.165218  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:36.190238  485986 cri.go:89] found id: ""
	I1205 06:48:36.190252  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.190259  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:36.190264  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:36.190323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:36.216962  485986 cri.go:89] found id: ""
	I1205 06:48:36.216975  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.216982  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:36.216987  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:36.217043  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:36.241075  485986 cri.go:89] found id: ""
	I1205 06:48:36.241089  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.241096  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:36.241107  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:36.241174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:36.267257  485986 cri.go:89] found id: ""
	I1205 06:48:36.267272  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.267278  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:36.267284  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:36.267350  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:36.293288  485986 cri.go:89] found id: ""
	I1205 06:48:36.293310  485986 logs.go:282] 0 containers: []
	W1205 06:48:36.293320  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:36.293327  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:36.293338  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:36.363749  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:36.356228   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.356654   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358204   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358589   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.360031   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:36.356228   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.356654   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358204   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.358589   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:36.360031   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:36.363759  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:36.363769  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:36.438180  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:36.438203  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:36.466903  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:36.466919  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:36.532968  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:36.532989  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
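	Because the apiserver never answers, every pass falls back to host-level sources: the kubelet and CRI-O units from journald, the kernel ring buffer, and a raw container listing. The same bundle can be pulled manually with the commands the log runs (a sketch; the 400-line tail is just the value minikube uses here):

	sudo journalctl -u kubelet -n 400    # kubelet unit log
	sudo journalctl -u crio -n 400       # CRI-O unit log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and above
	sudo crictl ps -a                    # all containers, any state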
	I1205 06:48:39.048421  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:39.059045  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:39.059109  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:39.083511  485986 cri.go:89] found id: ""
	I1205 06:48:39.083526  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.083532  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:39.083537  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:39.083599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:39.107712  485986 cri.go:89] found id: ""
	I1205 06:48:39.107725  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.107732  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:39.107736  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:39.107793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:39.132566  485986 cri.go:89] found id: ""
	I1205 06:48:39.132580  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.132588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:39.132593  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:39.132650  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:39.161417  485986 cri.go:89] found id: ""
	I1205 06:48:39.161431  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.161438  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:39.161443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:39.161511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:39.186314  485986 cri.go:89] found id: ""
	I1205 06:48:39.186328  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.186335  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:39.186340  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:39.186428  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:39.210957  485986 cri.go:89] found id: ""
	I1205 06:48:39.210971  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.210980  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:39.210986  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:39.211044  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:39.236120  485986 cri.go:89] found id: ""
	I1205 06:48:39.236134  485986 logs.go:282] 0 containers: []
	W1205 06:48:39.236141  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:39.236148  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:39.236159  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:39.250894  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:39.250911  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:39.334545  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:39.318351   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.322965   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.323804   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.325552   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:39.326015   14062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:39.334556  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:39.334567  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:39.413949  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:39.413970  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:39.444354  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:39.444370  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.015174  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:42.026667  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:42.026732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:42.056643  485986 cri.go:89] found id: ""
	I1205 06:48:42.056658  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.056666  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:42.056672  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:42.056732  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:42.084714  485986 cri.go:89] found id: ""
	I1205 06:48:42.084731  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.084745  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:42.084750  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:42.084817  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:42.115735  485986 cri.go:89] found id: ""
	I1205 06:48:42.115750  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.115757  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:42.115763  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:42.115828  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:42.148687  485986 cri.go:89] found id: ""
	I1205 06:48:42.148703  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.148711  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:42.148717  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:42.148783  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:42.183060  485986 cri.go:89] found id: ""
	I1205 06:48:42.183076  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.183084  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:42.183089  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:42.183162  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:42.216582  485986 cri.go:89] found id: ""
	I1205 06:48:42.216598  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.216606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:42.216612  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:42.216684  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:42.247171  485986 cri.go:89] found id: ""
	I1205 06:48:42.247186  485986 logs.go:282] 0 containers: []
	W1205 06:48:42.247193  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:42.247201  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:42.247217  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:42.285459  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:42.285487  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:42.355504  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:42.355523  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:42.370693  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:42.370709  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:42.438568  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:48:42.429502   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.430264   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432148   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.432615   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:42.434364   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:48:42.438578  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:42.438588  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:45.014965  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:45.054270  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:45.054339  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:45.114057  485986 cri.go:89] found id: ""
	I1205 06:48:45.114075  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.114090  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:45.114097  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:45.114172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:45.165369  485986 cri.go:89] found id: ""
	I1205 06:48:45.165394  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.165402  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:45.165408  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:45.165494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:45.212325  485986 cri.go:89] found id: ""
	I1205 06:48:45.212342  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.212349  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:45.212355  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:45.212424  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:45.254096  485986 cri.go:89] found id: ""
	I1205 06:48:45.254114  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.254127  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:45.254134  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:45.254294  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:45.305666  485986 cri.go:89] found id: ""
	I1205 06:48:45.305681  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.305688  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:45.305694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:45.305753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:45.347701  485986 cri.go:89] found id: ""
	I1205 06:48:45.347715  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.347721  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:45.347726  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:45.347793  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:45.373745  485986 cri.go:89] found id: ""
	I1205 06:48:45.373760  485986 logs.go:282] 0 containers: []
	W1205 06:48:45.373775  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:45.373782  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:45.373793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:45.439756  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:45.439776  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:45.454781  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:45.454797  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:45.521815  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:45.514514   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.515029   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516480   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.516967   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:45.518548   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:45.521826  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:45.521838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:45.602427  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:45.602455  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.134541  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:48.144703  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:48.144768  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:48.169929  485986 cri.go:89] found id: ""
	I1205 06:48:48.169942  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.169949  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:48.169954  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:48.170014  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:48.194815  485986 cri.go:89] found id: ""
	I1205 06:48:48.194828  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.194835  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:48.194840  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:48.194898  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:48.220017  485986 cri.go:89] found id: ""
	I1205 06:48:48.220031  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.220038  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:48.220043  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:48.220101  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:48.249449  485986 cri.go:89] found id: ""
	I1205 06:48:48.249462  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.249470  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:48.249481  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:48.249552  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:48.284921  485986 cri.go:89] found id: ""
	I1205 06:48:48.284935  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.284942  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:48.284947  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:48.285006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:48.315138  485986 cri.go:89] found id: ""
	I1205 06:48:48.315152  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.315159  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:48.315164  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:48.315223  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:48.347265  485986 cri.go:89] found id: ""
	I1205 06:48:48.347279  485986 logs.go:282] 0 containers: []
	W1205 06:48:48.347286  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:48.347293  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:48.347304  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:48.375662  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:48.375678  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:48.440841  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:48.440863  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:48.456128  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:48.456144  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:48.523196  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:48.515425   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.515785   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.517359   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.518051   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:48.519586   14390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:48.523206  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:48.523216  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:51.100852  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:51.111413  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:51.111475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:51.139392  485986 cri.go:89] found id: ""
	I1205 06:48:51.139406  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.139414  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:51.139419  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:51.139483  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:51.167265  485986 cri.go:89] found id: ""
	I1205 06:48:51.167279  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.167286  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:51.167291  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:51.167347  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:51.192337  485986 cri.go:89] found id: ""
	I1205 06:48:51.192351  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.192358  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:51.192363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:51.192419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:51.217599  485986 cri.go:89] found id: ""
	I1205 06:48:51.217614  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.217621  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:51.217627  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:51.217683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:51.242555  485986 cri.go:89] found id: ""
	I1205 06:48:51.242568  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.242576  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:51.242580  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:51.242641  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:51.270447  485986 cri.go:89] found id: ""
	I1205 06:48:51.270462  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.270469  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:51.270474  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:51.270551  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:51.300340  485986 cri.go:89] found id: ""
	I1205 06:48:51.300353  485986 logs.go:282] 0 containers: []
	W1205 06:48:51.300360  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:51.300375  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:51.300385  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:51.373583  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:51.373604  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:51.388609  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:51.388624  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:51.449562  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:51.442150   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.442836   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444322   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.444649   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:51.446074   14484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:51.449572  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:51.449584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:51.523352  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:51.523373  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.052404  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:54.065168  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:54.065280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:54.097086  485986 cri.go:89] found id: ""
	I1205 06:48:54.097102  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.097109  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:54.097114  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:54.097173  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:54.128973  485986 cri.go:89] found id: ""
	I1205 06:48:54.128988  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.128995  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:54.129000  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:54.129066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:54.163279  485986 cri.go:89] found id: ""
	I1205 06:48:54.163294  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.163301  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:54.163305  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:54.163363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:54.200034  485986 cri.go:89] found id: ""
	I1205 06:48:54.200049  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.200056  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:54.200061  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:54.200119  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:54.232483  485986 cri.go:89] found id: ""
	I1205 06:48:54.232498  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.232504  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:54.232509  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:54.232572  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:54.256577  485986 cri.go:89] found id: ""
	I1205 06:48:54.256598  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.256606  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:54.256611  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:54.256673  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:54.288762  485986 cri.go:89] found id: ""
	I1205 06:48:54.288788  485986 logs.go:282] 0 containers: []
	W1205 06:48:54.288796  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:54.288804  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:54.288815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:54.368738  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:54.368758  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:54.395932  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:54.395948  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:48:54.464047  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:54.464066  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:54.479400  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:54.479416  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:54.546819  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:54.538668   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.539294   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.540985   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.541524   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:54.543065   14601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
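	The probe timestamps (06:48:42, 06:48:45, 06:48:48, ...) show this whole cycle repeating roughly every three seconds. A rough shell equivalent of that wait loop, using the pgrep pattern and interval from the log (the five-minute deadline is an assumed placeholder, not taken from this report):

	# Retry the same process probe until kube-apiserver shows up or a
	# deadline passes; pattern and 3 s cadence match the log above.
	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  (( SECONDS >= deadline )) && { echo 'timed out waiting for kube-apiserver' >&2; exit 1; }
	  sleep 3
	done
	echo 'kube-apiserver process found'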
	I1205 06:48:57.047675  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:48:57.058076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:48:57.058143  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:48:57.082332  485986 cri.go:89] found id: ""
	I1205 06:48:57.082347  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.082355  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:48:57.082360  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:48:57.082442  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:48:57.108051  485986 cri.go:89] found id: ""
	I1205 06:48:57.108071  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.108078  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:48:57.108083  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:48:57.108139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:48:57.137107  485986 cri.go:89] found id: ""
	I1205 06:48:57.137129  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.137136  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:48:57.137141  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:48:57.137198  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:48:57.163240  485986 cri.go:89] found id: ""
	I1205 06:48:57.163272  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.163279  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:48:57.163285  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:48:57.163352  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:48:57.192699  485986 cri.go:89] found id: ""
	I1205 06:48:57.192725  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.192735  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:48:57.192740  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:48:57.192807  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:48:57.220916  485986 cri.go:89] found id: ""
	I1205 06:48:57.220931  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.220938  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:48:57.220943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:48:57.221010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:48:57.248028  485986 cri.go:89] found id: ""
	I1205 06:48:57.248042  485986 logs.go:282] 0 containers: []
	W1205 06:48:57.248049  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:48:57.248057  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:48:57.248068  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:48:57.262955  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:48:57.262971  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:48:57.355127  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:48:57.346596   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.347188   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.348974   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.349619   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:48:57.351449   14682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:48:57.355141  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:48:57.355151  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:48:57.433116  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:48:57.433135  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:48:57.464587  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:48:57.464603  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.033434  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:00.083145  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:00.083219  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:00.188573  485986 cri.go:89] found id: ""
	I1205 06:49:00.188591  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.188607  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:00.188613  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:00.188683  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:00.262241  485986 cri.go:89] found id: ""
	I1205 06:49:00.262258  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.262265  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:00.262271  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:00.262346  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:00.303849  485986 cri.go:89] found id: ""
	I1205 06:49:00.303866  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.303875  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:00.303881  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:00.303981  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:00.349047  485986 cri.go:89] found id: ""
	I1205 06:49:00.349063  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.349071  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:00.349076  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:00.349147  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:00.379299  485986 cri.go:89] found id: ""
	I1205 06:49:00.379317  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.379325  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:00.379332  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:00.379419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:00.409559  485986 cri.go:89] found id: ""
	I1205 06:49:00.409575  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.409582  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:00.409589  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:00.409656  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:00.439884  485986 cri.go:89] found id: ""
	I1205 06:49:00.439899  485986 logs.go:282] 0 containers: []
	W1205 06:49:00.439907  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:00.439916  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:00.439933  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:00.508652  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:00.508672  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:00.524482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:00.524504  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:00.586066  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:00.578087   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.578919   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.580633   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.581135   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:00.582631   14795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:00.586076  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:00.586087  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:00.663208  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:00.663229  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:03.193638  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:03.204025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:03.204086  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:03.228565  485986 cri.go:89] found id: ""
	I1205 06:49:03.228579  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.228586  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:03.228592  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:03.228649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:03.254850  485986 cri.go:89] found id: ""
	I1205 06:49:03.254864  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.254871  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:03.254876  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:03.254937  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:03.289088  485986 cri.go:89] found id: ""
	I1205 06:49:03.289101  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.289108  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:03.289113  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:03.289194  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:03.322876  485986 cri.go:89] found id: ""
	I1205 06:49:03.322891  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.322905  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:03.322910  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:03.322971  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:03.352868  485986 cri.go:89] found id: ""
	I1205 06:49:03.352883  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.352890  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:03.352895  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:03.352957  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:03.381474  485986 cri.go:89] found id: ""
	I1205 06:49:03.381495  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.381502  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:03.381508  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:03.381569  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:03.410037  485986 cri.go:89] found id: ""
	I1205 06:49:03.410051  485986 logs.go:282] 0 containers: []
	W1205 06:49:03.410058  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:03.410071  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:03.410081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:03.479009  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:03.479028  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:03.493685  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:03.493702  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:03.561170  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:03.553306   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.554220   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.555759   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.556128   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:03.557670   14899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:03.561179  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:03.561190  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:03.638291  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:03.638315  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.175002  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:06.185259  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:06.185319  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:06.215092  485986 cri.go:89] found id: ""
	I1205 06:49:06.215106  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.215113  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:06.215119  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:06.215175  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:06.245195  485986 cri.go:89] found id: ""
	I1205 06:49:06.245209  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.245216  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:06.245221  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:06.245283  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:06.271319  485986 cri.go:89] found id: ""
	I1205 06:49:06.271333  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.271340  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:06.271346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:06.271404  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:06.300132  485986 cri.go:89] found id: ""
	I1205 06:49:06.300146  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.300152  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:06.300158  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:06.300216  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:06.337931  485986 cri.go:89] found id: ""
	I1205 06:49:06.337945  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.337952  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:06.337957  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:06.338017  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:06.365963  485986 cri.go:89] found id: ""
	I1205 06:49:06.365978  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.365985  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:06.365991  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:06.366048  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:06.396366  485986 cri.go:89] found id: ""
	I1205 06:49:06.396382  485986 logs.go:282] 0 containers: []
	W1205 06:49:06.396389  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:06.396397  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:06.396410  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:06.424940  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:06.424956  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:06.490847  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:06.490864  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:06.506209  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:06.506225  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:06.572331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:06.564878   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.565472   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.566969   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.567471   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:06.568900   15014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:49:06.572342  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:06.572352  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
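	Every crictl probe in these cycles returns an empty id list, so the control-plane static pods were never created at all. The kubelet journal that minikube is already collecting is the natural place to look for the reason; a sketch of how to filter it (the grep pattern is illustrative, not taken from this report):

	# Pull the last 400 kubelet log lines, as minikube does, and keep only
	# error-looking entries for a quick scan.
	minikube ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20"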
	I1205 06:49:09.157509  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:09.167469  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:09.167529  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:09.192289  485986 cri.go:89] found id: ""
	I1205 06:49:09.192304  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.192311  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:09.192316  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:09.192375  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:09.217082  485986 cri.go:89] found id: ""
	I1205 06:49:09.217096  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.217103  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:09.217108  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:09.217167  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:09.242357  485986 cri.go:89] found id: ""
	I1205 06:49:09.242371  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.242412  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:09.242417  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:09.242474  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:09.267197  485986 cri.go:89] found id: ""
	I1205 06:49:09.267211  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.267218  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:09.267223  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:09.267282  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:09.302740  485986 cri.go:89] found id: ""
	I1205 06:49:09.302754  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.302761  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:09.302766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:09.302824  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:09.338883  485986 cri.go:89] found id: ""
	I1205 06:49:09.338910  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.338917  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:09.338923  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:09.338988  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:09.365834  485986 cri.go:89] found id: ""
	I1205 06:49:09.365848  485986 logs.go:282] 0 containers: []
	W1205 06:49:09.365855  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:09.365862  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:09.365872  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:09.433408  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:09.433430  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:09.448763  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:09.448785  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:09.510400  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:09.502828   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.503352   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505004   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505431   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.506857   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:09.502828   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.503352   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505004   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.505431   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:09.506857   15107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
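Note: the failing command is minikube's bundled kubectl run against the in-node kubeconfig, and the error text appears twice per attempt only because the log quotes the command's stderr both inside the failure message and again in the "** stderr **" block; it is one failure, not two. The same check can be reproduced by hand inside the node, using the exact command from the log above:

	minikube ssh
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig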
	I1205 06:49:09.510413  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:09.510424  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:09.589135  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:09.589155  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:12.118439  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:12.128584  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:12.128642  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:12.153045  485986 cri.go:89] found id: ""
	I1205 06:49:12.153059  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.153066  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:12.153071  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:12.153138  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:12.181785  485986 cri.go:89] found id: ""
	I1205 06:49:12.181798  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.181805  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:12.181810  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:12.181867  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:12.208813  485986 cri.go:89] found id: ""
	I1205 06:49:12.208827  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.208834  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:12.208845  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:12.208903  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:12.234917  485986 cri.go:89] found id: ""
	I1205 06:49:12.234931  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.234938  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:12.234943  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:12.235004  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:12.260438  485986 cri.go:89] found id: ""
	I1205 06:49:12.260452  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.260459  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:12.260464  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:12.260531  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:12.296968  485986 cri.go:89] found id: ""
	I1205 06:49:12.296981  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.296988  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:12.296994  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:12.297050  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:12.333915  485986 cri.go:89] found id: ""
	I1205 06:49:12.333929  485986 logs.go:282] 0 containers: []
	W1205 06:49:12.333936  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:12.333943  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:12.333953  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:12.406977  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:12.406998  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:12.422290  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:12.422306  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:12.488646  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:12.480809   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.481450   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.482905   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.483507   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.485097   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:12.480809   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.481450   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.482905   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.483507   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:12.485097   15217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:12.488656  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:12.488666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:12.564028  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:12.564050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.095313  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:15.105802  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:15.105864  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:15.133035  485986 cri.go:89] found id: ""
	I1205 06:49:15.133049  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.133057  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:15.133062  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:15.133118  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:15.158425  485986 cri.go:89] found id: ""
	I1205 06:49:15.158439  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.158446  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:15.158451  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:15.158507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:15.183550  485986 cri.go:89] found id: ""
	I1205 06:49:15.183564  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.183571  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:15.183576  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:15.183637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:15.209390  485986 cri.go:89] found id: ""
	I1205 06:49:15.209405  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.209413  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:15.209418  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:15.209481  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:15.234806  485986 cri.go:89] found id: ""
	I1205 06:49:15.234820  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.234828  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:15.234833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:15.234893  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:15.260606  485986 cri.go:89] found id: ""
	I1205 06:49:15.260621  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.260628  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:15.260633  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:15.260689  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:15.291752  485986 cri.go:89] found id: ""
	I1205 06:49:15.291766  485986 logs.go:282] 0 containers: []
	W1205 06:49:15.291773  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:15.291782  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:15.291793  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:15.308482  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:15.308499  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:15.380232  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:15.372488   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.372953   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374118   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374587   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.376095   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:15.372488   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.372953   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374118   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.374587   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:15.376095   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:15.380242  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:15.380253  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:15.456924  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:15.456947  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:15.486075  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:15.486091  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:18.055175  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:18.065657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:18.065716  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:18.092418  485986 cri.go:89] found id: ""
	I1205 06:49:18.092432  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.092440  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:18.092445  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:18.092504  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:18.119095  485986 cri.go:89] found id: ""
	I1205 06:49:18.119109  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.119116  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:18.119120  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:18.119174  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:18.158317  485986 cri.go:89] found id: ""
	I1205 06:49:18.158331  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.158338  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:18.158343  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:18.158435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:18.182920  485986 cri.go:89] found id: ""
	I1205 06:49:18.182934  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.182941  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:18.182946  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:18.183006  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:18.209415  485986 cri.go:89] found id: ""
	I1205 06:49:18.209430  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.209438  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:18.209443  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:18.209512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:18.236631  485986 cri.go:89] found id: ""
	I1205 06:49:18.236644  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.236651  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:18.236656  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:18.236713  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:18.262726  485986 cri.go:89] found id: ""
	I1205 06:49:18.262740  485986 logs.go:282] 0 containers: []
	W1205 06:49:18.262747  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:18.262754  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:18.262765  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:18.339996  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:18.340018  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:18.358676  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:18.358696  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:18.426638  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:18.417748   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.418455   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420167   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420749   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.422549   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:18.417748   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.418455   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420167   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.420749   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:18.422549   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:18.426647  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:18.426706  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:18.504263  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:18.504284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:21.036369  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:21.046428  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:21.046488  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:21.071147  485986 cri.go:89] found id: ""
	I1205 06:49:21.071161  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.071168  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:21.071173  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:21.071235  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:21.095397  485986 cri.go:89] found id: ""
	I1205 06:49:21.095412  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.095421  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:21.095426  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:21.095485  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:21.119759  485986 cri.go:89] found id: ""
	I1205 06:49:21.119773  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.119780  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:21.119786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:21.119850  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:21.144972  485986 cri.go:89] found id: ""
	I1205 06:49:21.144986  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.144993  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:21.144998  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:21.145054  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:21.170022  485986 cri.go:89] found id: ""
	I1205 06:49:21.170035  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.170042  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:21.170047  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:21.170104  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:21.198867  485986 cri.go:89] found id: ""
	I1205 06:49:21.198881  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.198887  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:21.198893  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:21.198948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:21.224547  485986 cri.go:89] found id: ""
	I1205 06:49:21.224561  485986 logs.go:282] 0 containers: []
	W1205 06:49:21.224568  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:21.224575  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:21.224585  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:21.291060  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:21.291081  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:21.308799  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:21.308815  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:21.380254  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:21.371835   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.372583   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374223   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.374739   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:21.376207   15536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:21.380264  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:21.380275  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:21.456817  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:21.456838  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:23.986703  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:23.996959  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:23.997028  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:24.026421  485986 cri.go:89] found id: ""
	I1205 06:49:24.026435  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.026443  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:24.026450  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:24.026512  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:24.056568  485986 cri.go:89] found id: ""
	I1205 06:49:24.056582  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.056589  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:24.056595  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:24.056654  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:24.082518  485986 cri.go:89] found id: ""
	I1205 06:49:24.082532  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.082539  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:24.082544  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:24.082605  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:24.108752  485986 cri.go:89] found id: ""
	I1205 06:49:24.108766  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.108783  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:24.108788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:24.108854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:24.142101  485986 cri.go:89] found id: ""
	I1205 06:49:24.142133  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.142140  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:24.142146  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:24.142214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:24.169035  485986 cri.go:89] found id: ""
	I1205 06:49:24.169050  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.169057  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:24.169067  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:24.169139  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:24.194140  485986 cri.go:89] found id: ""
	I1205 06:49:24.194154  485986 logs.go:282] 0 containers: []
	W1205 06:49:24.194161  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:24.194169  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:24.194179  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:24.269020  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:24.269042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:24.319041  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:24.319057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:24.402423  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:24.402446  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:24.418669  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:24.418687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:24.486837  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:24.479049   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.479672   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481244   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.481834   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:24.483301   15654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:26.988496  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:26.998567  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:26.998632  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:27.030118  485986 cri.go:89] found id: ""
	I1205 06:49:27.030131  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.030138  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:27.030144  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:27.030200  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:27.057209  485986 cri.go:89] found id: ""
	I1205 06:49:27.057224  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.057230  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:27.057236  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:27.057291  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:27.083393  485986 cri.go:89] found id: ""
	I1205 06:49:27.083408  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.083415  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:27.083420  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:27.083480  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:27.108369  485986 cri.go:89] found id: ""
	I1205 06:49:27.108383  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.108390  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:27.108394  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:27.108454  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:27.136631  485986 cri.go:89] found id: ""
	I1205 06:49:27.136645  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.136653  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:27.136659  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:27.136726  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:27.163262  485986 cri.go:89] found id: ""
	I1205 06:49:27.163277  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.163286  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:27.163294  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:27.163353  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:27.188133  485986 cri.go:89] found id: ""
	I1205 06:49:27.188152  485986 logs.go:282] 0 containers: []
	W1205 06:49:27.188160  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:27.188167  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:27.188177  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:27.252259  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:27.244740   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.245127   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.246802   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.247149   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:27.248724   15735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:27.252270  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:27.252280  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:27.330222  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:27.330243  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:27.360158  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:27.360174  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:27.433608  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:27.433628  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:29.949566  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:29.960768  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:29.960834  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:29.986155  485986 cri.go:89] found id: ""
	I1205 06:49:29.986169  485986 logs.go:282] 0 containers: []
	W1205 06:49:29.986176  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:29.986181  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:29.986241  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:30.063119  485986 cri.go:89] found id: ""
	I1205 06:49:30.063137  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.063144  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:30.063163  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:30.063243  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:30.093759  485986 cri.go:89] found id: ""
	I1205 06:49:30.093774  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.093782  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:30.093788  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:30.093860  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:30.123430  485986 cri.go:89] found id: ""
	I1205 06:49:30.123452  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.123460  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:30.123465  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:30.123554  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:30.151722  485986 cri.go:89] found id: ""
	I1205 06:49:30.151744  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.151752  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:30.151758  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:30.151820  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:30.186802  485986 cri.go:89] found id: ""
	I1205 06:49:30.186831  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.186852  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:30.186859  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:30.186929  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:30.213270  485986 cri.go:89] found id: ""
	I1205 06:49:30.213293  485986 logs.go:282] 0 containers: []
	W1205 06:49:30.213301  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:30.213309  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:30.213320  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:30.279872  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:30.279893  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:30.296737  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:30.296759  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:30.374429  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:30.364333   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.365104   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367064   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.367828   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:30.369652   15851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:30.374439  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:30.374450  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:30.450678  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:30.450701  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:32.984051  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:32.993990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:32.994049  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:33.020636  485986 cri.go:89] found id: ""
	I1205 06:49:33.020650  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.020657  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:33.020663  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:33.020719  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:33.049013  485986 cri.go:89] found id: ""
	I1205 06:49:33.049027  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.049034  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:33.049039  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:33.049098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:33.078567  485986 cri.go:89] found id: ""
	I1205 06:49:33.078581  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.078588  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:33.078594  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:33.078652  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:33.103212  485986 cri.go:89] found id: ""
	I1205 06:49:33.103226  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.103233  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:33.103238  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:33.103293  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:33.127983  485986 cri.go:89] found id: ""
	I1205 06:49:33.127997  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.128004  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:33.128030  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:33.128085  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:33.153777  485986 cri.go:89] found id: ""
	I1205 06:49:33.153792  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.153799  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:33.153805  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:33.153863  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:33.178536  485986 cri.go:89] found id: ""
	I1205 06:49:33.178550  485986 logs.go:282] 0 containers: []
	W1205 06:49:33.178557  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:33.178565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:33.178576  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:33.244570  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:33.244594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:33.259835  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:33.259851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:33.338788  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:33.330420   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.331279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333021   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.333317   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:33.335279   15953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:33.338799  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:33.338810  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:33.425207  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:33.425236  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
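
Each retry cycle in this excerpt follows the same shape: probe for a live kube-apiserver process, then ask the CRI runtime for each control-plane container by name; every probe here comes back empty. A minimal sketch for reproducing the same checks by hand, assuming shell access to the node (e.g. via `minikube ssh`); the component list mirrors the names queried in the log above:

```bash
# Hypothetical manual reproduction of the per-cycle checks minikube runs above.
# Assumes shell access to the node, e.g. via `minikube ssh`.
sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # any live apiserver process?
for c in kube-apiserver etcd coredns kube-scheduler \
         kube-proxy kube-controller-manager kindnet; do
  ids=$(sudo crictl ps -a --quiet --name="$c")  # all states, IDs only
  echo "$c: ${ids:-<no container found>}"
done
```
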
	I1205 06:49:35.956397  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:35.966480  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:35.966543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:35.995353  485986 cri.go:89] found id: ""
	I1205 06:49:35.995367  485986 logs.go:282] 0 containers: []
	W1205 06:49:35.995374  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:35.995378  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:35.995435  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:36.024388  485986 cri.go:89] found id: ""
	I1205 06:49:36.024403  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.024410  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:36.024415  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:36.024477  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:36.051022  485986 cri.go:89] found id: ""
	I1205 06:49:36.051036  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.051054  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:36.051059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:36.051124  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:36.076096  485986 cri.go:89] found id: ""
	I1205 06:49:36.076110  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.076117  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:36.076123  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:36.076180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:36.105105  485986 cri.go:89] found id: ""
	I1205 06:49:36.105119  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.105127  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:36.105131  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:36.105187  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:36.131094  485986 cri.go:89] found id: ""
	I1205 06:49:36.131107  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.131114  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:36.131120  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:36.131180  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:36.160327  485986 cri.go:89] found id: ""
	I1205 06:49:36.160342  485986 logs.go:282] 0 containers: []
	W1205 06:49:36.160349  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:36.160357  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:36.160367  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:36.175190  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:36.175205  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:36.236428  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:36.228915   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.229566   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231085   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.231523   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:36.232984   16055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:36.236479  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:36.236489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:36.320584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:36.320608  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:36.354951  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:36.354968  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:38.924529  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:38.934948  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:38.935008  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:38.961612  485986 cri.go:89] found id: ""
	I1205 06:49:38.961626  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.961633  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:38.961638  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:38.961699  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:38.987542  485986 cri.go:89] found id: ""
	I1205 06:49:38.987562  485986 logs.go:282] 0 containers: []
	W1205 06:49:38.987569  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:38.987574  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:38.987637  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:39.017388  485986 cri.go:89] found id: ""
	I1205 06:49:39.017402  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.017409  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:39.017414  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:39.017475  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:39.043798  485986 cri.go:89] found id: ""
	I1205 06:49:39.043813  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.043821  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:39.043826  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:39.043883  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:39.072134  485986 cri.go:89] found id: ""
	I1205 06:49:39.072148  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.072155  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:39.072160  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:39.072214  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:39.097127  485986 cri.go:89] found id: ""
	I1205 06:49:39.097141  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.097148  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:39.097154  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:39.097215  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:39.125406  485986 cri.go:89] found id: ""
	I1205 06:49:39.125420  485986 logs.go:282] 0 containers: []
	W1205 06:49:39.125427  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:39.125434  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:39.125447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:39.191762  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:39.191782  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:39.206972  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:39.206987  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:39.274830  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:39.266057   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.266571   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.267713   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.268169   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:39.269672   16161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:39.274841  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:39.274851  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:39.365052  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:39.365073  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:41.896143  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:41.906833  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:41.906906  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:41.934416  485986 cri.go:89] found id: ""
	I1205 06:49:41.934430  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.934437  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:41.934442  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:41.934498  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:41.962049  485986 cri.go:89] found id: ""
	I1205 06:49:41.962063  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.962079  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:41.962084  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:41.962150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:41.991028  485986 cri.go:89] found id: ""
	I1205 06:49:41.991042  485986 logs.go:282] 0 containers: []
	W1205 06:49:41.991049  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:41.991053  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:41.991121  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:42.020514  485986 cri.go:89] found id: ""
	I1205 06:49:42.020536  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.020544  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:42.020550  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:42.020614  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:42.047453  485986 cri.go:89] found id: ""
	I1205 06:49:42.047467  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.047474  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:42.047479  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:42.047535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:42.079004  485986 cri.go:89] found id: ""
	I1205 06:49:42.079019  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.079026  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:42.079033  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:42.079098  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:42.110781  485986 cri.go:89] found id: ""
	I1205 06:49:42.110806  485986 logs.go:282] 0 containers: []
	W1205 06:49:42.110814  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:42.110821  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:42.110832  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:42.191665  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:42.191688  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:42.241592  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:42.241609  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:42.314021  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:42.314041  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:42.331123  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:42.331139  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:42.401371  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:42.393586   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.394295   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.395948   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.396255   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:42.397758   16284 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
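
The repeated `describe nodes` failure is a symptom rather than a cause: with no kube-apiserver container found, nothing is listening on localhost:8441, so every kubectl request is refused at connect time. A quick hedged check from the node; `ss` and `curl` are standard tools assumed to be present, not commands taken from the log:

```bash
# Confirm the refusal kubectl reports: nothing listens on the apiserver port.
sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
# Reproduce the exact error path (connection refused, not TLS or auth):
curl -ks --max-time 5 https://localhost:8441/api || echo "connect failed"
```
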
	I1205 06:49:44.902557  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:44.913856  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:44.913928  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:44.944329  485986 cri.go:89] found id: ""
	I1205 06:49:44.944343  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.944350  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:44.944355  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:44.944411  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:44.972877  485986 cri.go:89] found id: ""
	I1205 06:49:44.972890  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.972897  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:44.972902  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:44.972961  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:44.997771  485986 cri.go:89] found id: ""
	I1205 06:49:44.997785  485986 logs.go:282] 0 containers: []
	W1205 06:49:44.997792  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:44.997797  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:44.997858  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:45.044196  485986 cri.go:89] found id: ""
	I1205 06:49:45.044212  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.044220  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:45.044225  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:45.044296  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:45.100218  485986 cri.go:89] found id: ""
	I1205 06:49:45.100234  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.100242  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:45.100247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:45.100322  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:45.143680  485986 cri.go:89] found id: ""
	I1205 06:49:45.143696  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.143704  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:45.143710  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:45.144010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:45.184794  485986 cri.go:89] found id: ""
	I1205 06:49:45.184810  485986 logs.go:282] 0 containers: []
	W1205 06:49:45.184818  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:45.184827  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:45.184840  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:45.266987  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:45.267020  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:45.286876  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:45.286913  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:45.370968  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:45.363581   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.364305   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.365832   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.366292   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.367509   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:45.363581   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.364305   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.365832   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.366292   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:45.367509   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:45.370979  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:45.370991  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:45.446768  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:45.446788  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:47.979096  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:47.989170  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:47.989236  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:48.018828  485986 cri.go:89] found id: ""
	I1205 06:49:48.018841  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.018849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:48.018854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:48.018915  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:48.048874  485986 cri.go:89] found id: ""
	I1205 06:49:48.048888  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.048895  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:48.048901  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:48.048960  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:48.075707  485986 cri.go:89] found id: ""
	I1205 06:49:48.075722  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.075728  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:48.075733  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:48.075792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:48.100630  485986 cri.go:89] found id: ""
	I1205 06:49:48.100644  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.100651  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:48.100657  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:48.100715  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:48.126176  485986 cri.go:89] found id: ""
	I1205 06:49:48.126190  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.126197  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:48.126202  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:48.126266  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:48.153143  485986 cri.go:89] found id: ""
	I1205 06:49:48.153157  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.153170  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:48.153181  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:48.153249  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:48.179066  485986 cri.go:89] found id: ""
	I1205 06:49:48.179080  485986 logs.go:282] 0 containers: []
	W1205 06:49:48.179087  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:48.179094  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:48.179104  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:48.238867  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:48.231394   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.232041   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233115   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233702   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.235281   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:48.231394   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.232041   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233115   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.233702   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:48.235281   16469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:48.238878  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:48.238892  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:48.318473  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:48.318493  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:48.351978  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:48.352000  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:48.421167  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:48.421187  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
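
A note on the terse dmesg flags used in every gather cycle: per util-linux `dmesg`, `-H` produces human-readable timestamps (and would normally start a pager), `-P` suppresses that pager, `-L=never` disables color so the capture stays clean, and `--level` limits output to warnings and worse. The same command as the Run: lines above, with the flags spelled out:

```bash
# Flags spelled out:
#   -H            --human: readable relative timestamps (implies a pager)
#   -P            --nopager: suppress the pager -H would start
#   -L=never      --color=never: no ANSI escapes in captured output
#   --level ...   only warnings and worse from the kernel ring buffer
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
```
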
	I1205 06:49:50.939180  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:50.949233  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:50.949290  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:50.978828  485986 cri.go:89] found id: ""
	I1205 06:49:50.978842  485986 logs.go:282] 0 containers: []
	W1205 06:49:50.978849  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:50.978854  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:50.978910  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:51.004445  485986 cri.go:89] found id: ""
	I1205 06:49:51.004461  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.004469  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:51.004475  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:51.004545  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:51.032998  485986 cri.go:89] found id: ""
	I1205 06:49:51.033012  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.033019  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:51.033025  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:51.033080  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:51.058907  485986 cri.go:89] found id: ""
	I1205 06:49:51.058921  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.058929  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:51.058934  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:51.058998  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:51.088751  485986 cri.go:89] found id: ""
	I1205 06:49:51.088765  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.088773  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:51.088778  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:51.088836  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:51.114739  485986 cri.go:89] found id: ""
	I1205 06:49:51.114753  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.114760  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:51.114766  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:51.114827  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:51.146228  485986 cri.go:89] found id: ""
	I1205 06:49:51.146242  485986 logs.go:282] 0 containers: []
	W1205 06:49:51.146249  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:51.146257  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:51.146267  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:51.213460  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:51.213479  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:51.228827  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:51.228842  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:51.295308  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:51.287335   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.288164   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.289832   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.290165   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:51.291647   16578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:51.295318  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:51.295328  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:51.378866  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:51.378887  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
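
The container-status step is deliberately runtime-agnostic: it resolves `crictl` through `which` (falling back to the bare name if `which` finds nothing) and, if the CRI listing fails outright, retries with the Docker CLI. A behavior-equivalent expansion of the one-liner, for readability:

```bash
# Behavior-equivalent expansion of:
#   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
CRICTL=$(which crictl || echo crictl)  # full path if installed, bare name otherwise
if ! sudo "$CRICTL" ps -a; then        # list all CRI containers, any state
  sudo docker ps -a                    # fall back to Docker on CRI failure
fi
```
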
	I1205 06:49:53.908370  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:53.918562  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:53.918621  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:53.944262  485986 cri.go:89] found id: ""
	I1205 06:49:53.944277  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.944284  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:53.944289  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:53.944349  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:53.969495  485986 cri.go:89] found id: ""
	I1205 06:49:53.969509  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.969516  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:53.969522  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:53.969602  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:53.996074  485986 cri.go:89] found id: ""
	I1205 06:49:53.996088  485986 logs.go:282] 0 containers: []
	W1205 06:49:53.996095  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:53.996100  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:53.996155  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:54.023768  485986 cri.go:89] found id: ""
	I1205 06:49:54.023783  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.023790  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:54.023796  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:54.023854  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:54.048370  485986 cri.go:89] found id: ""
	I1205 06:49:54.048385  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.048392  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:54.048397  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:54.048458  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:54.073241  485986 cri.go:89] found id: ""
	I1205 06:49:54.073255  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.073263  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:54.073268  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:54.073329  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:54.098794  485986 cri.go:89] found id: ""
	I1205 06:49:54.098808  485986 logs.go:282] 0 containers: []
	W1205 06:49:54.098816  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:54.098824  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:54.098833  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:54.165835  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:54.165854  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:54.181432  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:54.181447  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:54.255506  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:54.247030   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.247865   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.249614   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.250263   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:54.251991   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:54.255516  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:54.255529  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:54.341643  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:54.341666  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:49:56.871077  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:56.883786  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:56.883848  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:56.913242  485986 cri.go:89] found id: ""
	I1205 06:49:56.913255  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.913262  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:56.913268  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:56.913325  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:56.940834  485986 cri.go:89] found id: ""
	I1205 06:49:56.940849  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.940856  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:56.940863  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:56.940923  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:56.969612  485986 cri.go:89] found id: ""
	I1205 06:49:56.969626  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.969633  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:56.969639  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:56.969698  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:56.996324  485986 cri.go:89] found id: ""
	I1205 06:49:56.996338  485986 logs.go:282] 0 containers: []
	W1205 06:49:56.996345  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:56.996351  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:56.996412  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:57.023385  485986 cri.go:89] found id: ""
	I1205 06:49:57.023399  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.023407  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:57.023412  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:57.023470  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:49:57.047721  485986 cri.go:89] found id: ""
	I1205 06:49:57.047734  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.047741  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:49:57.047747  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:49:57.047803  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:49:57.072770  485986 cri.go:89] found id: ""
	I1205 06:49:57.072783  485986 logs.go:282] 0 containers: []
	W1205 06:49:57.072790  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:49:57.072798  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:49:57.072807  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:49:57.137878  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:49:57.137898  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:49:57.153088  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:49:57.153110  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:49:57.215030  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:49:57.207293   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.208101   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.209770   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.210073   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:49:57.211546   16791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:49:57.215041  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:49:57.215057  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:49:57.298537  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:49:57.298556  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
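The run is now in a steady retry loop: every few seconds minikube probes for a kube-apiserver process, asks CRI-O for each control-plane container (every query returns empty), and re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs; the describe-nodes step fails each time because nothing is listening on localhost:8441. A minimal sketch for reproducing the same probe by hand, assuming a minikube version where `minikube ssh -- <cmd>` runs a one-off command in the node; PROFILE is a placeholder, since the real profile name is not shown in this excerpt:

    # Placeholder profile name -- substitute the profile under test.
    PROFILE=functional-000000
    # Is any kube-apiserver process running inside the node?
    minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Does CRI-O know about an apiserver container, running or exited?
    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --name kube-apiserver
    # Is anything listening on the port the kubeconfig points at?
    minikube -p "$PROFILE" ssh -- sudo ss -tlnp | grep 8441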
	I1205 06:49:59.836134  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:49:59.846404  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:49:59.846463  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:49:59.871308  485986 cri.go:89] found id: ""
	I1205 06:49:59.871322  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.871329  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:49:59.871333  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:49:59.871389  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:49:59.897753  485986 cri.go:89] found id: ""
	I1205 06:49:59.897767  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.897774  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:49:59.897779  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:49:59.897840  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:49:59.922634  485986 cri.go:89] found id: ""
	I1205 06:49:59.922649  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.922655  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:49:59.922661  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:49:59.922721  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:49:59.946450  485986 cri.go:89] found id: ""
	I1205 06:49:59.946463  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.946473  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:49:59.946478  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:49:59.946535  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:49:59.972723  485986 cri.go:89] found id: ""
	I1205 06:49:59.972738  485986 logs.go:282] 0 containers: []
	W1205 06:49:59.972745  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:49:59.972750  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:49:59.972809  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:00.021990  485986 cri.go:89] found id: ""
	I1205 06:50:00.022006  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.022014  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:00.022020  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:00.022097  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:00.144138  485986 cri.go:89] found id: ""
	I1205 06:50:00.144154  485986 logs.go:282] 0 containers: []
	W1205 06:50:00.144162  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:00.144171  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:00.144184  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:00.257253  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:00.257284  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:00.303408  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:00.303429  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:00.439913  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:00.430535   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.431750   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.433629   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.434046   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:00.435764   16900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:00.439925  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:00.439937  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:00.532383  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:00.532408  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
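The probe that gates each pass is the pgrep call above; its flags (standard procps pgrep) determine what counts as a hit:

    #   -f  match the pattern against the full command line
    #   -x  require the pattern to match that command line exactly
    #   -n  print only the newest matching process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # No output and exit status 1 means no match, which is what keeps
    # the retry loop seen above going.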
	I1205 06:50:03.067932  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:03.078353  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:03.078441  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:03.108943  485986 cri.go:89] found id: ""
	I1205 06:50:03.108957  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.108964  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:03.108969  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:03.109032  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:03.139046  485986 cri.go:89] found id: ""
	I1205 06:50:03.139060  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.139077  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:03.139082  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:03.139150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:03.166455  485986 cri.go:89] found id: ""
	I1205 06:50:03.166470  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.166479  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:03.166485  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:03.166587  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:03.195955  485986 cri.go:89] found id: ""
	I1205 06:50:03.195969  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.195976  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:03.195981  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:03.196037  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:03.221513  485986 cri.go:89] found id: ""
	I1205 06:50:03.221527  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.221539  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:03.221545  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:03.221616  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:03.250570  485986 cri.go:89] found id: ""
	I1205 06:50:03.250583  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.250589  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:03.250595  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:03.250649  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:03.278449  485986 cri.go:89] found id: ""
	I1205 06:50:03.278463  485986 logs.go:282] 0 containers: []
	W1205 06:50:03.278470  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:03.278477  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:03.278488  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:03.355784  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:03.355803  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:03.375344  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:03.375365  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:03.438665  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:03.431058   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.431854   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433418   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.433752   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:03.435269   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:03.438679  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:03.438690  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:03.518012  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:03.518040  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:06.053429  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:06.064448  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:06.064511  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:06.091072  485986 cri.go:89] found id: ""
	I1205 06:50:06.091087  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.091094  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:06.091100  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:06.091166  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:06.119823  485986 cri.go:89] found id: ""
	I1205 06:50:06.119837  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.119844  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:06.119849  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:06.119905  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:06.148798  485986 cri.go:89] found id: ""
	I1205 06:50:06.148812  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.148819  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:06.148824  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:06.148880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:06.179319  485986 cri.go:89] found id: ""
	I1205 06:50:06.179334  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.179341  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:06.179346  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:06.179402  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:06.204637  485986 cri.go:89] found id: ""
	I1205 06:50:06.204652  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.204659  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:06.204665  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:06.204727  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:06.232891  485986 cri.go:89] found id: ""
	I1205 06:50:06.232906  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.232913  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:06.232919  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:06.232977  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:06.260874  485986 cri.go:89] found id: ""
	I1205 06:50:06.260888  485986 logs.go:282] 0 containers: []
	W1205 06:50:06.260895  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:06.260904  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:06.260914  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:06.331930  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:06.331950  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:06.349062  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:06.349078  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:06.413245  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:06.404839   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.405471   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407216   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407836   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.409486   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:06.404839   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.405471   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407216   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.407836   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:06.409486   17108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:06.413254  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:06.413265  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:06.491562  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:06.491584  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:09.021435  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:09.031990  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:09.032051  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:09.057732  485986 cri.go:89] found id: ""
	I1205 06:50:09.057746  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.057753  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:09.057758  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:09.057814  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:09.085296  485986 cri.go:89] found id: ""
	I1205 06:50:09.085309  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.085316  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:09.085321  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:09.085377  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:09.113133  485986 cri.go:89] found id: ""
	I1205 06:50:09.113147  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.113154  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:09.113159  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:09.113221  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:09.139103  485986 cri.go:89] found id: ""
	I1205 06:50:09.139117  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.139125  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:09.139130  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:09.139196  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:09.171980  485986 cri.go:89] found id: ""
	I1205 06:50:09.171995  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.172005  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:09.172011  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:09.172066  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:09.197034  485986 cri.go:89] found id: ""
	I1205 06:50:09.197048  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.197055  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:09.197059  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:09.197122  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:09.222626  485986 cri.go:89] found id: ""
	I1205 06:50:09.222641  485986 logs.go:282] 0 containers: []
	W1205 06:50:09.222649  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:09.222656  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:09.222667  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:09.288268  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:09.288287  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:09.304011  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:09.304027  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:09.378142  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:09.369828   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.370439   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372261   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372817   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.374506   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:09.369828   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.370439   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372261   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.372817   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:09.374506   17213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:09.378151  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:09.378162  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:09.455057  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:09.455077  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
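Each describe-nodes failure is the same symptom: kubectl reads /var/lib/minikube/kubeconfig, is pointed at https://localhost:8441, and the TCP connect is refused. A quick way to confirm which endpoint that kubeconfig targets (the grep is our choice here, not a minikube command; PROFILE as in the sketch above):

    minikube -p "$PROFILE" ssh -- sudo grep 'server:' /var/lib/minikube/kubeconfig
    # Expected output resembles:  server: https://localhost:8441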
	I1205 06:50:11.984604  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:11.994696  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:11.994758  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:12.021691  485986 cri.go:89] found id: ""
	I1205 06:50:12.021706  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.021713  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:12.021718  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:12.021777  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:12.049086  485986 cri.go:89] found id: ""
	I1205 06:50:12.049099  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.049106  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:12.049111  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:12.049170  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:12.077335  485986 cri.go:89] found id: ""
	I1205 06:50:12.077348  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.077355  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:12.077360  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:12.077419  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:12.104976  485986 cri.go:89] found id: ""
	I1205 06:50:12.104990  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.104998  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:12.105003  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:12.105065  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:12.130275  485986 cri.go:89] found id: ""
	I1205 06:50:12.130289  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.130297  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:12.130303  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:12.130359  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:12.156777  485986 cri.go:89] found id: ""
	I1205 06:50:12.156791  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.156798  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:12.156804  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:12.156862  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:12.184468  485986 cri.go:89] found id: ""
	I1205 06:50:12.184482  485986 logs.go:282] 0 containers: []
	W1205 06:50:12.184489  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:12.184496  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:12.184506  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:12.250190  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:12.250212  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:12.265279  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:12.265295  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:12.350637  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:12.342053   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.342918   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.344705   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.345237   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.346914   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:12.342053   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.342918   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.344705   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.345237   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:12.346914   17315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:12.350648  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:12.350659  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:12.429523  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:12.429548  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:14.958454  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:14.970034  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:14.970110  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:14.996731  485986 cri.go:89] found id: ""
	I1205 06:50:14.996754  485986 logs.go:282] 0 containers: []
	W1205 06:50:14.996761  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:14.996767  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:14.996833  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:15.032417  485986 cri.go:89] found id: ""
	I1205 06:50:15.032440  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.032448  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:15.032454  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:15.032524  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:15.060989  485986 cri.go:89] found id: ""
	I1205 06:50:15.061008  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.061016  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:15.061022  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:15.061083  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:15.088194  485986 cri.go:89] found id: ""
	I1205 06:50:15.088208  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.088215  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:15.088221  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:15.088280  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:15.115923  485986 cri.go:89] found id: ""
	I1205 06:50:15.115938  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.115945  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:15.115951  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:15.116010  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:15.146014  485986 cri.go:89] found id: ""
	I1205 06:50:15.146028  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.146035  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:15.146041  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:15.146150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:15.173160  485986 cri.go:89] found id: ""
	I1205 06:50:15.173175  485986 logs.go:282] 0 containers: []
	W1205 06:50:15.173191  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:15.173199  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:15.173208  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:15.245690  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:15.237281   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.237912   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.239571   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.240233   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.241922   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:15.237281   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.237912   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.239571   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.240233   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:15.241922   17413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:15.245700  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:15.245710  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:15.325395  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:15.325417  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:15.356222  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:15.356276  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:15.428176  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:15.428198  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
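Note the gather order shifts between passes (kubelet and dmesg now come last), but the set of log sources is unchanged. For reference, the dmesg invocation repeated in every pass uses standard util-linux flags:

    #   -H            human-readable output
    #   -P            do not pipe the output into a pager
    #   -L=never      disable color escape codes in the captured log
    #   --level ...   keep only warn-and-worse kernel messages
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400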
	I1205 06:50:17.943733  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:17.954302  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:17.954363  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:17.979858  485986 cri.go:89] found id: ""
	I1205 06:50:17.979872  485986 logs.go:282] 0 containers: []
	W1205 06:50:17.979879  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:17.979884  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:17.979948  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:18.013482  485986 cri.go:89] found id: ""
	I1205 06:50:18.013497  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.013504  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:18.013509  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:18.013593  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:18.040079  485986 cri.go:89] found id: ""
	I1205 06:50:18.040094  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.040102  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:18.040108  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:18.040172  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:18.066285  485986 cri.go:89] found id: ""
	I1205 06:50:18.066300  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.066308  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:18.066312  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:18.066369  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:18.091446  485986 cri.go:89] found id: ""
	I1205 06:50:18.091461  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.091468  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:18.091473  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:18.091532  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:18.121218  485986 cri.go:89] found id: ""
	I1205 06:50:18.121234  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.121241  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:18.121247  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:18.121306  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:18.147004  485986 cri.go:89] found id: ""
	I1205 06:50:18.147018  485986 logs.go:282] 0 containers: []
	W1205 06:50:18.147032  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:18.147039  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:18.147050  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:18.212973  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:18.205230   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.206055   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207680   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207996   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.209502   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:18.205230   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.206055   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207680   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.207996   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:18.209502   17522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:18.212983  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:18.212993  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:18.290491  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:18.290510  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:18.319970  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:18.319986  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:18.392419  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:18.392440  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:20.907875  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:20.918552  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:20.918615  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:20.948914  485986 cri.go:89] found id: ""
	I1205 06:50:20.948928  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.948935  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:20.948941  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:20.948999  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:20.974289  485986 cri.go:89] found id: ""
	I1205 06:50:20.974303  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.974310  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:20.974315  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:20.974371  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:20.999954  485986 cri.go:89] found id: ""
	I1205 06:50:20.999968  485986 logs.go:282] 0 containers: []
	W1205 06:50:20.999976  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:20.999980  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:21.000038  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:21.029788  485986 cri.go:89] found id: ""
	I1205 06:50:21.029803  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.029810  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:21.029815  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:21.029875  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:21.055163  485986 cri.go:89] found id: ""
	I1205 06:50:21.055177  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.055183  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:21.055188  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:21.055246  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:21.080955  485986 cri.go:89] found id: ""
	I1205 06:50:21.080969  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.080977  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:21.080982  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:21.081052  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:21.108615  485986 cri.go:89] found id: ""
	I1205 06:50:21.108629  485986 logs.go:282] 0 containers: []
	W1205 06:50:21.108637  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:21.108644  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:21.108655  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:21.173790  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:21.173811  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:21.188952  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:21.188969  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:21.253459  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:21.245103   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.245717   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.247496   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.248173   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.249826   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1205 06:50:21.245103   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.245717   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.247496   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.248173   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:21.249826   17635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 06:50:21.253469  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:21.253480  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:21.337063  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:21.337084  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
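The "container status" gather is a single fallback chain; unpacked, it reads:

    # Resolve crictl's full path (falling back to the bare name and a PATH
    # lookup), list all CRI containers including exited ones, and only if
    # that fails, try the Docker CLI instead.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a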
	I1205 06:50:23.866768  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:23.877363  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:23.877430  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:23.903790  485986 cri.go:89] found id: ""
	I1205 06:50:23.903807  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.903814  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:23.903819  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:23.903880  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:23.933319  485986 cri.go:89] found id: ""
	I1205 06:50:23.933333  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.933341  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:23.933346  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:23.933403  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:23.959901  485986 cri.go:89] found id: ""
	I1205 06:50:23.959914  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.959922  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:23.959927  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:23.959987  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:23.986070  485986 cri.go:89] found id: ""
	I1205 06:50:23.986083  485986 logs.go:282] 0 containers: []
	W1205 06:50:23.986090  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:23.986096  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:23.986154  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:24.014309  485986 cri.go:89] found id: ""
	I1205 06:50:24.014324  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.014331  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:24.014336  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:24.014422  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:24.040569  485986 cri.go:89] found id: ""
	I1205 06:50:24.040590  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.040598  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:24.040603  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:24.040663  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:24.066648  485986 cri.go:89] found id: ""
	I1205 06:50:24.066661  485986 logs.go:282] 0 containers: []
	W1205 06:50:24.066669  485986 logs.go:284] No container was found matching "kindnet"
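	Each sweep above checks for the expected control-plane containers one name at a time. The seven crictl invocations collapse into a single loop; this sketch reproduces the commands exactly as logged, with only the loop framing added:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      # empty output means no container (running or exited) matches the name
	      sudo crictl ps -a --quiet --name="$name"
	    done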
	I1205 06:50:24.066676  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:24.066687  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:24.145239  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:24.145259  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:24.173133  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:24.173149  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:24.238469  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:24.238489  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:24.253802  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:24.253821  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:24.341051  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:24.329593   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.330313   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332016   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.332556   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:24.337208   17757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
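	When no control-plane containers are found, minikube falls back to gathering node-level logs. The sources collected in each pass map to these commands, copied from the Run: lines in this log:

	    sudo journalctl -u kubelet -n 400                 # kubelet logs
	    sudo journalctl -u crio -n 400                    # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status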
	I1205 06:50:26.841329  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:26.852711  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:26.852792  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:26.878845  485986 cri.go:89] found id: ""
	I1205 06:50:26.878858  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.878865  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:26.878871  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:26.878926  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:26.903460  485986 cri.go:89] found id: ""
	I1205 06:50:26.903475  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.903482  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:26.903487  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:26.903543  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:26.928316  485986 cri.go:89] found id: ""
	I1205 06:50:26.928330  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.928337  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:26.928342  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:26.928401  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:26.957464  485986 cri.go:89] found id: ""
	I1205 06:50:26.957477  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.957484  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:26.957490  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:26.957547  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:26.985494  485986 cri.go:89] found id: ""
	I1205 06:50:26.985508  485986 logs.go:282] 0 containers: []
	W1205 06:50:26.985515  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:26.985520  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:26.985588  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:27.012077  485986 cri.go:89] found id: ""
	I1205 06:50:27.012092  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.012099  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:27.012105  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:27.012164  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:27.037759  485986 cri.go:89] found id: ""
	I1205 06:50:27.037772  485986 logs.go:282] 0 containers: []
	W1205 06:50:27.037779  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:27.037802  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:27.037813  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:27.068005  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:27.068022  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:27.132023  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:27.132042  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:27.147964  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:27.147981  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:27.210077  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:27.201653   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.202464   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204190   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.204761   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:27.206360   17859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:27.210087  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:27.210098  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:29.784398  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:29.794460  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:50:29.794523  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:50:29.820207  485986 cri.go:89] found id: ""
	I1205 06:50:29.820221  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.820228  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:50:29.820235  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:50:29.820301  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:50:29.845407  485986 cri.go:89] found id: ""
	I1205 06:50:29.845421  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.845429  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:50:29.845434  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:50:29.845494  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:50:29.871350  485986 cri.go:89] found id: ""
	I1205 06:50:29.871364  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.871371  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:50:29.871376  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:50:29.871434  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:50:29.896668  485986 cri.go:89] found id: ""
	I1205 06:50:29.896682  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.896689  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:50:29.896694  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:50:29.896753  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:50:29.925230  485986 cri.go:89] found id: ""
	I1205 06:50:29.925243  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.925250  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:50:29.925256  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:50:29.925320  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:50:29.950431  485986 cri.go:89] found id: ""
	I1205 06:50:29.950445  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.950453  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:50:29.950459  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:50:29.950516  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:50:29.975493  485986 cri.go:89] found id: ""
	I1205 06:50:29.975507  485986 logs.go:282] 0 containers: []
	W1205 06:50:29.975514  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:50:29.975522  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:50:29.975532  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:50:29.990544  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:50:29.990561  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:50:30.089331  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:50:30.079547   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.080925   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.082899   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.083556   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:50:30.085423   17950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:50:30.089343  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:50:30.089355  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 06:50:30.176998  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:50:30.177019  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:50:30.207325  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:50:30.207342  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:50:32.779616  485986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 06:50:32.789524  485986 kubeadm.go:602] duration metric: took 4m3.78523296s to restartPrimaryControlPlane
	W1205 06:50:32.789596  485986 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 06:50:32.789791  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:50:33.200382  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:50:33.213168  485986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 06:50:33.221236  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:50:33.221295  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:50:33.229165  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:50:33.229174  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:50:33.229226  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:50:33.236961  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:50:33.237026  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:50:33.244309  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:50:33.252201  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:50:33.252257  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:50:33.259677  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.267359  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:50:33.267427  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:50:33.275464  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:50:33.283208  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:50:33.283271  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
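	The stale-config check above greps each kubeconfig for the expected control-plane endpoint and deletes the file when the check fails; here every grep exits with status 2 because kubeadm reset already removed the files. The four check-and-remove pairs are equivalent to this sketch:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/$f.conf \
	        || sudo rm -f /etc/kubernetes/$f.conf
	    done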
	I1205 06:50:33.290746  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:50:33.405156  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:50:33.405615  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:50:33.478173  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:54:34.582933  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:54:34.582957  485986 kubeadm.go:319] 
	I1205 06:54:34.583076  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 06:54:34.588185  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:34.588247  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:34.588363  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:34.588446  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:34.588482  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:34.588527  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:34.588597  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:34.588649  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:34.588697  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:34.588744  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:34.588792  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:34.588836  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:34.588883  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:34.588934  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:34.589006  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:34.589099  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:34.589189  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:34.589249  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:34.592315  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:34.592403  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:34.592463  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:34.592535  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:34.592603  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:34.592668  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:34.592743  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:34.592810  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:34.592871  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:34.592953  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:34.593046  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:34.593088  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:34.593139  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:34.593190  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:34.593242  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:34.593294  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:34.593352  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:34.593406  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:34.593499  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:34.593561  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:34.596524  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:34.596625  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:34.596698  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:34.596789  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:34.596910  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:34.597004  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:34.597119  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:34.597212  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:34.597250  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:34.597382  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:34.597485  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:54:34.597547  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00128632s
	I1205 06:54:34.597550  485986 kubeadm.go:319] 
	I1205 06:54:34.597605  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:54:34.597636  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:54:34.597743  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:54:34.597746  485986 kubeadm.go:319] 
	I1205 06:54:34.597848  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:54:34.597879  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:54:34.597909  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 06:54:34.598022  485986 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00128632s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
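	The wait-control-plane phase gives the kubelet 4m0s to report healthy before aborting. The probe kubeadm performs, and the triage it suggests, can be run directly on the node (commands as printed in the failure message above):

	    curl -sSL http://127.0.0.1:10248/healthz   # kubelet health endpoint; prints "ok" when healthy
	    systemctl status kubelet                   # is the unit running at all?
	    journalctl -xeu kubelet                    # recent kubelet logs with explanations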
	
	I1205 06:54:34.598117  485986 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 06:54:34.598416  485986 kubeadm.go:319] 
	I1205 06:54:35.010606  485986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 06:54:35.026641  485986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 06:54:35.026696  485986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 06:54:35.034906  485986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 06:54:35.034914  485986 kubeadm.go:158] found existing configuration files:
	
	I1205 06:54:35.034968  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1205 06:54:35.043100  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 06:54:35.043156  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 06:54:35.050682  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1205 06:54:35.058435  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 06:54:35.058491  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 06:54:35.066352  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.075006  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 06:54:35.075083  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 06:54:35.083161  485986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1205 06:54:35.091527  485986 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 06:54:35.091591  485986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 06:54:35.099509  485986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 06:54:35.143144  485986 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 06:54:35.143194  485986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 06:54:35.214737  485986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 06:54:35.214806  485986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 06:54:35.214841  485986 kubeadm.go:319] OS: Linux
	I1205 06:54:35.214894  485986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 06:54:35.214941  485986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 06:54:35.214988  485986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 06:54:35.215036  485986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 06:54:35.215082  485986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 06:54:35.215135  485986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 06:54:35.215179  485986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 06:54:35.215227  485986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 06:54:35.215272  485986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 06:54:35.280867  485986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 06:54:35.280975  485986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 06:54:35.281065  485986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 06:54:35.290789  485986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 06:54:35.294356  485986 out.go:252]   - Generating certificates and keys ...
	I1205 06:54:35.294469  485986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 06:54:35.294532  485986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 06:54:35.294608  485986 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 06:54:35.294667  485986 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 06:54:35.294735  485986 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 06:54:35.294788  485986 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 06:54:35.294850  485986 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 06:54:35.294910  485986 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 06:54:35.294989  485986 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 06:54:35.295060  485986 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 06:54:35.295097  485986 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 06:54:35.295152  485986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 06:54:35.600230  485986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 06:54:35.819372  485986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 06:54:36.031672  485986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 06:54:36.347784  485986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 06:54:36.515743  485986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 06:54:36.516403  485986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 06:54:36.519035  485986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 06:54:36.522469  485986 out.go:252]   - Booting up control plane ...
	I1205 06:54:36.522648  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 06:54:36.522737  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 06:54:36.522811  485986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 06:54:36.538750  485986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 06:54:36.538854  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 06:54:36.547809  485986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 06:54:36.548944  485986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 06:54:36.549484  485986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 06:54:36.685042  485986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 06:54:36.685156  485986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 06:58:36.684952  485986 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233025s
	I1205 06:58:36.684982  485986 kubeadm.go:319] 
	I1205 06:58:36.685040  485986 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 06:58:36.685073  485986 kubeadm.go:319] 	- The kubelet is not running
	I1205 06:58:36.685203  485986 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 06:58:36.685213  485986 kubeadm.go:319] 
	I1205 06:58:36.685319  485986 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 06:58:36.685352  485986 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 06:58:36.685382  485986 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 06:58:36.685386  485986 kubeadm.go:319] 
	I1205 06:58:36.690024  485986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 06:58:36.690504  485986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 06:58:36.690648  485986 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 06:58:36.690898  485986 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 06:58:36.690904  485986 kubeadm.go:319] 
	I1205 06:58:36.690971  485986 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
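	The cgroups v1 warning in both init attempts names the kubelet configuration option FailCgroupV1. A minimal sketch of opting back in to cgroups v1, assuming the YAML field is the lower-camel-case form of the option named in the warning (verify against the kubelet v1.35 configuration reference before relying on it):

	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false   # explicitly allow kubelet >= v1.35 on a cgroups v1 host

	The warning also requires explicitly skipping the validation, which the kubeadm init invocations in this log already do via SystemVerification in --ignore-preflight-errors.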
	I1205 06:58:36.691025  485986 kubeadm.go:403] duration metric: took 12m7.722207493s to StartCluster
	I1205 06:58:36.691058  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 06:58:36.691120  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 06:58:36.717503  485986 cri.go:89] found id: ""
	I1205 06:58:36.717522  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.717530  485986 logs.go:284] No container was found matching "kube-apiserver"
	I1205 06:58:36.717535  485986 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 06:58:36.717599  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 06:58:36.742068  485986 cri.go:89] found id: ""
	I1205 06:58:36.742083  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.742090  485986 logs.go:284] No container was found matching "etcd"
	I1205 06:58:36.742095  485986 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 06:58:36.742150  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 06:58:36.766426  485986 cri.go:89] found id: ""
	I1205 06:58:36.766439  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.766446  485986 logs.go:284] No container was found matching "coredns"
	I1205 06:58:36.766452  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 06:58:36.766507  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 06:58:36.791681  485986 cri.go:89] found id: ""
	I1205 06:58:36.791696  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.791703  485986 logs.go:284] No container was found matching "kube-scheduler"
	I1205 06:58:36.791707  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 06:58:36.791767  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 06:58:36.816243  485986 cri.go:89] found id: ""
	I1205 06:58:36.816257  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.816264  485986 logs.go:284] No container was found matching "kube-proxy"
	I1205 06:58:36.816269  485986 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 06:58:36.816323  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 06:58:36.841386  485986 cri.go:89] found id: ""
	I1205 06:58:36.841399  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.841406  485986 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 06:58:36.841411  485986 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 06:58:36.841467  485986 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 06:58:36.866554  485986 cri.go:89] found id: ""
	I1205 06:58:36.866568  485986 logs.go:282] 0 containers: []
	W1205 06:58:36.866575  485986 logs.go:284] No container was found matching "kindnet"
	I1205 06:58:36.866584  485986 logs.go:123] Gathering logs for container status ...
	I1205 06:58:36.866594  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 06:58:36.900565  485986 logs.go:123] Gathering logs for kubelet ...
	I1205 06:58:36.900582  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 06:58:36.968215  485986 logs.go:123] Gathering logs for dmesg ...
	I1205 06:58:36.968234  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 06:58:36.983291  485986 logs.go:123] Gathering logs for describe nodes ...
	I1205 06:58:36.983307  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 06:58:37.054622  485986 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 06:58:37.047057   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.047417   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.048898   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.049436   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 06:58:37.051017   21752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1205 06:58:37.054632  485986 logs.go:123] Gathering logs for CRI-O ...
	I1205 06:58:37.054644  485986 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1205 06:58:37.134983  485986 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233025s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 06:58:37.135045  485986 out.go:285] * 
	W1205 06:58:37.135160  485986 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[identical to the kubeadm init stdout above]
	
	stderr:
	[identical to the kubeadm init stderr and wait-control-plane error above]
	
	W1205 06:58:37.135223  485986 out.go:285] * 
	W1205 06:58:37.137432  485986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 06:58:37.142766  485986 out.go:203] 
	W1205 06:58:37.146311  485986 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[identical to the kubeadm init stdout above]
	
	stderr:
	[identical to the kubeadm init stderr and wait-control-plane error above]
	
	W1205 06:58:37.146363  485986 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 06:58:37.146497  485986 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 06:58:37.150301  485986 out.go:203] 
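	The suggestion above names one fix, and the kubeadm warning earlier in the same output names another (FailCgroupV1). A minimal remediation sketch, assuming the host must stay on cgroup v1 and reusing the profile name from this run; the config field spelling is assumed from the warning text plus the kubelet v1beta1 camelCase convention, so verify it against the kubelet v1.35 config reference before relying on it:
	
	# Option from the minikube suggestion above:
	minikube start -p functional-787602 --extra-config=kubelet.cgroup-driver=systemd
	
	# Option from the kubeadm warning: opt kubelet v1.35+ back into cgroup v1
	# ('failCgroupV1' is the assumed camelCase form of the 'FailCgroupV1' option;
	# config path taken from the [kubelet-start] lines above):
	minikube ssh -p functional-787602 -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"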
	
	
	==> CRI-O <==
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571372953Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571407981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571472524Z" level=info msg="Create NRI interface"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571571528Z" level=info msg="built-in NRI default validator is disabled"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571581186Z" level=info msg="runtime interface created"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571594224Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571600657Z" level=info msg="runtime interface starting up..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571606548Z" level=info msg="starting plugins..."
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571619709Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 05 06:46:27 functional-787602 crio[10584]: time="2025-12-05T06:46:27.571689602Z" level=info msg="No systemd watchdog enabled"
	Dec 05 06:46:27 functional-787602 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.481366601Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1e822775-5cef-40d3-9686-eee6d086f1b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482224852Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e1ef1844-3877-40e0-84c2-d1c873b40d24 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.482740149Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=56b0a0d4-9f66-4348-9e04-1e53dd2684db name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483228025Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=beb5cc41-ecba-44e2-8431-8eb7caf9e6f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.483764967Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d6fbbe20-116f-42f6-8365-a643bfd6a022 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484325426Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=cc465c27-997c-4720-add0-d2aaefef1742 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:50:33 functional-787602 crio[10584]: time="2025-12-05T06:50:33.484777542Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5847487f-12af-4b83-83de-0b1cf4bc7dd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.284218578Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9fcc6ad9-fc72-42e2-9eb3-af609b8c0fda name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285002572Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0a9c3300-2647-489a-a8c7-299acd2c2ff4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285494328Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ef012813-7294-42de-84e3-c56b0aecceed name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.285987553Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=97a76923-ddd0-413b-afdb-1a86b6e1781b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.286464253Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=72332317-e652-4a97-9d17-3ba7818fe38f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.28695984Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=443ec697-27e1-4420-9454-8afdb0ee65b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 06:54:35 functional-787602 crio[10584]: time="2025-12-05T06:54:35.287383469Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7f6d73c6-60bf-4743-9f1c-60ae6c282918 name=/runtime.v1.ImageService/ImageStatus
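	The "Checking image status" entries above are ImageStatus RPCs against CRI-O; they can be replayed by hand, assuming crictl is available inside the minikube node (image reference taken from the log):
	
	# Inspect one of the images CRI-O was asked about (crictl inspecti maps to
	# the /runtime.v1.ImageService/ImageStatus call logged above):
	minikube ssh -p functional-787602 -- sudo crictl inspecti registry.k8s.io/kube-apiserver:v1.35.0-beta.0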
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:00:34.723269   23197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:34.724093   23197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:34.725678   23197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:34.726082   23197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:34.727636   23197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:00:34 up  3:42,  0 user,  load average: 0.22, 0.23, 0.34
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:00:32 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:32 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 794.
	Dec 05 07:00:32 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:32 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:32 functional-787602 kubelet[23088]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:32 functional-787602 kubelet[23088]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:32 functional-787602 kubelet[23088]: E1205 07:00:32.819912   23088 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:32 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:32 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:33 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 795.
	Dec 05 07:00:33 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:33 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:33 functional-787602 kubelet[23094]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:33 functional-787602 kubelet[23094]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:33 functional-787602 kubelet[23094]: E1205 07:00:33.592429   23094 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:33 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:33 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:34 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 796.
	Dec 05 07:00:34 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:34 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:34 functional-787602 kubelet[23114]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:34 functional-787602 kubelet[23114]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:34 functional-787602 kubelet[23114]: E1205 07:00:34.336291   23114 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:34 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:34 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
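	The kubelet crash loop above fails the same validation every time ("configured to not run on a host using cgroup v1"), so the host's cgroup version is the first thing to confirm; a minimal probe, assuming shell access to the node:
	
	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host:
	stat -fc %T /sys/fs/cgroup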
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (344.37996ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.43s)
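Every kubectl failure in this test reduces to the apiserver never listening on 192.168.49.2:8441; the symptom can be reproduced directly from the host, a minimal probe using the address and port from the log:

	# Expect "connection refused" while the control plane is down:
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo apiserver unreachable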

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1205 06:58:45.391611  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeats 8 more times]
I1205 06:58:54.602584  444147 retry.go:31] will retry after 4.212973944s: Temporary Error: Get "http://10.104.254.53": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeats 13 more times]
I1205 06:59:08.816006  444147 retry.go:31] will retry after 6.406073851s: Temporary Error: Get "http://10.104.254.53": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeats 15 more times]
I1205 06:59:25.222473  444147 retry.go:31] will retry after 6.217197573s: Temporary Error: Get "http://10.104.254.53": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeats 16 more times]
I1205 06:59:41.441778  444147 retry.go:31] will retry after 10.351834089s: Temporary Error: Get "http://10.104.254.53": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeats 19 more times]
I1205 07:00:01.795236  444147 retry.go:31] will retry after 21.163333356s: Temporary Error: Get "http://10.104.254.53": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the identical WARNING above repeats 19 more times while the helper retries the apiserver, until the client rate limiter gives up below]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (322.122731ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
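
For context, the polling that produced the WARNING block above can be approximated with a small client-go program. This is a minimal sketch of the kind of loop the helper runs, not the harness's actual code: the function name waitForLabeledPod, the 5-second interval, and the use of the job's KUBECONFIG path are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until a pod matching selector is Running or ctx expires.
// Each failed List prints a WARNING, and the final error is ctx.Err(), which
// matches the "context deadline exceeded" seen in the trace above.
func waitForLabeledPod(ctx context.Context, kubeconfig, ns, selector string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	tick := time.NewTicker(5 * time.Second) // assumed interval
	defer tick.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Against this cluster every List fails with "connection refused".
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	err := waitForLabeledPod(ctx, "/home/jenkins/minikube-integration/21997-441321/kubeconfig",
		"kube-system", "integration-test=storage-provisioner")
	fmt.Println("result:", err)
}

Run against this cluster, every List call fails with the same connection-refused error until the 4m0s context expires, which is exactly the failure mode recorded above.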
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
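
The inspect output above shows the container is still running and still publishing the apiserver port (8441/tcp -> 127.0.0.1:33151) even though nothing answers behind it. For reference, pulling that mapping out of the `docker inspect` JSON takes only a few lines; this is a throwaway sketch (the inspectEntry struct and apiServerHostPort function are illustrative names, not minikube code).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only the fragment of `docker inspect` output we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// apiServerHostPort returns the host address publishing the container's 8441/tcp.
func apiServerHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry // `docker inspect` emits a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		return b.HostIp + ":" + b.HostPort, nil // 127.0.0.1:33151 in this report
	}
	return "", fmt.Errorf("8441/tcp not published")
}

func main() {
	addr, err := apiServerHostPort("functional-787602")
	fmt.Println(addr, err)
}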
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (316.575772ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-787602 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image          │ functional-787602 image save --daemon kicbase/echo-server:functional-787602 --alsologtostderr                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /etc/ssl/certs/444147.pem                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /usr/share/ca-certificates/444147.pem                                                                                         │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /etc/ssl/certs/4441472.pem                                                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /usr/share/ca-certificates/4441472.pem                                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:01 UTC │
	│ ssh            │ functional-787602 ssh sudo cat /etc/test/nested/copy/444147/hosts                                                                                            │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ cp             │ functional-787602 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ ssh            │ functional-787602 ssh -n functional-787602 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ cp             │ functional-787602 cp functional-787602:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1400708700/001/cp-test.txt │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ ssh            │ functional-787602 ssh -n functional-787602 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ cp             │ functional-787602 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ ssh            │ functional-787602 ssh -n functional-787602 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ image          │ functional-787602 image ls --format short --alsologtostderr                                                                                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ image          │ functional-787602 image ls --format yaml --alsologtostderr                                                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ ssh            │ functional-787602 ssh pgrep buildkitd                                                                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │                     │
	│ image          │ functional-787602 image build -t localhost/my-image:functional-787602 testdata/build --alsologtostderr                                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ image          │ functional-787602 image ls                                                                                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ image          │ functional-787602 image ls --format json --alsologtostderr                                                                                                   │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ image          │ functional-787602 image ls --format table --alsologtostderr                                                                                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ update-context │ functional-787602 update-context --alsologtostderr -v=2                                                                                                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ update-context │ functional-787602 update-context --alsologtostderr -v=2                                                                                                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	│ update-context │ functional-787602 update-context --alsologtostderr -v=2                                                                                                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:01 UTC │ 05 Dec 25 07:01 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:00:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:00:49.699416  502985 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:49.699614  502985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.699641  502985 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:49.699659  502985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.699940  502985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:00:49.700305  502985 out.go:368] Setting JSON to false
	I1205 07:00:49.701230  502985 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13377,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:00:49.701328  502985 start.go:143] virtualization:  
	I1205 07:00:49.704521  502985 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:00:49.707554  502985 notify.go:221] Checking for updates...
	I1205 07:00:49.708410  502985 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:00:49.711339  502985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:00:49.714116  502985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:00:49.717016  502985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:00:49.719832  502985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:00:49.722711  502985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:00:49.726030  502985 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:49.726712  502985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:00:49.759422  502985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:00:49.759531  502985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.816255  502985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.807258963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.816364  502985 docker.go:319] overlay module found
	I1205 07:00:49.819519  502985 out.go:179] * Using the docker driver based on existing profile
	I1205 07:00:49.822352  502985 start.go:309] selected driver: docker
	I1205 07:00:49.822400  502985 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.822507  502985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:00:49.822624  502985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.878784  502985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.869881278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.879217  502985 cni.go:84] Creating CNI manager for ""
	I1205 07:00:49.879293  502985 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:00:49.879338  502985 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.882317  502985 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.345531529Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=e0e260b3-c6d3-422c-a8d8-fea710337aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.369982492Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=fdd9072c-c608-452e-9dcf-39515a101a39 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.370141362Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=fdd9072c-c608-452e-9dcf-39515a101a39 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.370183742Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=fdd9072c-c608-452e-9dcf-39515a101a39 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.212649583Z" level=info msg="Checking image status: kicbase/echo-server:functional-787602" id=87e80dbb-9863-4c85-85ab-a0d461850e9c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.23617424Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-787602" id=e741be7a-f775-41a4-8f5e-22cf12b8ab5a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.236309077Z" level=info msg="Image docker.io/kicbase/echo-server:functional-787602 not found" id=e741be7a-f775-41a4-8f5e-22cf12b8ab5a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.236347888Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=e741be7a-f775-41a4-8f5e-22cf12b8ab5a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.260395473Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=20e658bf-7482-4078-aedf-33fb9c1e43cc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.260528513Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=20e658bf-7482-4078-aedf-33fb9c1e43cc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.260565001Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=20e658bf-7482-4078-aedf-33fb9c1e43cc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.355243502Z" level=info msg="Checking image status: kicbase/echo-server:functional-787602" id=98c14e92-c089-4988-83f7-1c2a3b6b1272 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.379738863Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-787602" id=d891a901-c4e6-4d5e-b2c7-770f428f9f8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.379911649Z" level=info msg="Image docker.io/kicbase/echo-server:functional-787602 not found" id=d891a901-c4e6-4d5e-b2c7-770f428f9f8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.379973943Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=d891a901-c4e6-4d5e-b2c7-770f428f9f8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.403869862Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=64decca1-5eea-4c13-8b80-8430ca0dcc62 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.404001474Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=64decca1-5eea-4c13-8b80-8430ca0dcc62 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.404036511Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=64decca1-5eea-4c13-8b80-8430ca0dcc62 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.186122404Z" level=info msg="Checking image status: kicbase/echo-server:functional-787602" id=67ecbe1b-7abd-4c2f-9f71-d75902b4ca08 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.210947386Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-787602" id=3a9dcb2b-2e62-42a4-be48-2e2a6dbc9704 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.211141875Z" level=info msg="Image docker.io/kicbase/echo-server:functional-787602 not found" id=3a9dcb2b-2e62-42a4-be48-2e2a6dbc9704 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.211189145Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=3a9dcb2b-2e62-42a4-be48-2e2a6dbc9704 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.236160139Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=811c5a2f-d9cf-4032-a88e-353b260fd330 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.236315407Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=811c5a2f-d9cf-4032-a88e-353b260fd330 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.236362923Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=811c5a2f-d9cf-4032-a88e-353b260fd330 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:02:45.667267   26011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:02:45.667787   26011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:02:45.669257   26011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:02:45.669587   26011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:02:45.671051   26011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:02:45 up  3:44,  0 user,  load average: 0.45, 0.41, 0.40
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:02:43 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:02:44 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 969.
	Dec 05 07:02:44 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:02:44 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:02:44 functional-787602 kubelet[25884]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:02:44 functional-787602 kubelet[25884]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:02:44 functional-787602 kubelet[25884]: E1205 07:02:44.076681   25884 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:02:44 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:02:44 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:02:44 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 970.
	Dec 05 07:02:44 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:02:44 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:02:44 functional-787602 kubelet[25907]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:02:44 functional-787602 kubelet[25907]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:02:44 functional-787602 kubelet[25907]: E1205 07:02:44.830552   25907 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:02:44 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:02:44 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:02:45 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 971.
	Dec 05 07:02:45 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:02:45 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:02:45 functional-787602 kubelet[25993]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:02:45 functional-787602 kubelet[25993]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:02:45 functional-787602 kubelet[25993]: E1205 07:02:45.581307   25993 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:02:45 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:02:45 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
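
The kubelet section of the logs above is the strongest root-cause candidate: this kubelet build refuses to start on a host using cgroup v1, so systemd crash-loops it (restart counter 971) and the static apiserver pod never comes back, which explains every connection-refused error in this test. The conventional way to tell the two hierarchies apart is to check for /sys/fs/cgroup/cgroup.controllers, which only exists on the unified (v2) hierarchy; the preflight sketch below uses that check and is offered as an illustration, not as what minikube itself runs.

package main

import (
	"fmt"
	"os"
)

// cgroupMode reports which cgroup hierarchy the host root uses.
// On cgroup v2 the file /sys/fs/cgroup/cgroup.controllers exists at the root;
// on a v1 (or hybrid) mount it does not.
func cgroupMode() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "v2 (unified)"
	}
	return "v1 (legacy or hybrid)"
}

func main() {
	fmt.Println("host cgroup hierarchy:", cgroupMode())
}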
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (318.928122ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.81s)
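
One detail worth reading out of the failure signature: "connection refused" (seen throughout, as opposed to a dial timeout) means the node address is reachable but nothing is listening on 8441, pointing at the control plane rather than at container networking. A probe with only the standard library reproduces the distinction; the endpoint and 2-second timeout below are taken from or chosen for this report, not from the harness.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the harness kept dialing. "connection refused" = port
	// closed (apiserver down); an i/o timeout would instead suggest a
	// network partition or firewall.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}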

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-787602 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-787602 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (63.089643ms)

** stderr ** 
	E1205 07:00:57.105831  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.107389  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.108772  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.110130  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.111533  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-787602 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
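
The --template argument in the failed command is ordinary Go text/template syntax; evaluated over a node's label map it prints the space-separated label keys that the assertions below then search. A self-contained illustration with a stand-in map (the sample labels are placeholders, not the node's real label set):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for (index .items 0).metadata.labels in the kubectl invocation.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-787602",
		"minikube.k8s.io/name":   "functional-787602",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	_ = tmpl.Execute(os.Stdout, labels) // prints the keys, space-separated
}

Since the apiserver is down, kubectl never gets as far as evaluating the template, so every expected label check below fails with the same connection-refused stderr.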
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1205 07:00:57.105831  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.107389  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.108772  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.110130  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.111533  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1205 07:00:57.105831  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.107389  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.108772  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.110130  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.111533  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1205 07:00:57.105831  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.107389  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.108772  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.110130  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.111533  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1205 07:00:57.105831  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.107389  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.108772  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.110130  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.111533  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1205 07:00:57.105831  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.107389  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.108772  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.110130  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:57.111533  504166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
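All five expected labels (minikube.k8s.io/commit, version, updated_at, name, primary) fail with the same refused connection, so the assertions never see any labels at all. Against a healthy cluster, the query the test runs at functional_test.go:234 lists the label keys of the first node:

    kubectl --context functional-787602 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'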
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787602
helpers_test.go:243: (dbg) docker inspect functional-787602:

-- stdout --
	[
	    {
	        "Id": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	        "Created": "2025-12-05T06:31:30.839014939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T06:31:30.905614638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/hosts",
	        "LogPath": "/var/lib/docker/containers/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0/973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0-json.log",
	        "Name": "/functional-787602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "973942ab29ad7665e405531456d8a60c51f7f9019cf895d394bee6e51a42d8d0",
	                "LowerDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f9b9f7cd9276de1a87c9bc36e05ce3dade6e27e5ec4904b529ef4b431d4940a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-787602",
	                "Source": "/var/lib/docker/volumes/functional-787602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787602",
	                "name.minikube.sigs.k8s.io": "functional-787602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b68d9c6c608ee7200ea42b2ad855ac665c60abc9361eb5e104629180723a9c05",
	            "SandboxKey": "/var/run/docker/netns/b68d9c6c608e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:ef:19:c1:07:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b71fa7d523dfe0fd0273815c7024918a81af47b776c4461c309918837388a92",
	                    "EndpointID": "39721ac9291e1735a1c54513bea37967015651a21f17c4a2797623c90f46b050",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787602",
	                        "973942ab29ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
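The inspect output shows the kic container itself is healthy: it is running, holds 192.168.49.2 on the functional-787602 network, and publishes 8441/tcp to 127.0.0.1:33151. The failure is therefore inside the container, not in Docker networking. As a sketch, the published apiserver port can be pulled out of the same inspect data with the --format idiom that appears later in this log:

    # prints the host port mapped to the apiserver's 8441/tcp (33151 here)
    docker inspect functional-787602 \
      --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'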
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-787602 -n functional-787602: exit status 2 (317.695942ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount1 --alsologtostderr -v=1                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount1                                                                                                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount3 --alsologtostderr -v=1                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ mount     │ -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount2 --alsologtostderr -v=1                      │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh findmnt -T /mount2                                                                                                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh findmnt -T /mount3                                                                                                                  │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ mount     │ -p functional-787602 --kill=true                                                                                                                          │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ start     │ -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ start     │ -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ start     │ -p functional-787602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-787602 --alsologtostderr -v=1                                                                                            │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ ssh       │ functional-787602 ssh sudo systemctl is-active docker                                                                                                     │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ ssh       │ functional-787602 ssh sudo systemctl is-active containerd                                                                                                 │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │                     │
	│ image     │ functional-787602 image load --daemon kicbase/echo-server:functional-787602 --alsologtostderr                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image ls                                                                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image load --daemon kicbase/echo-server:functional-787602 --alsologtostderr                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image ls                                                                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image load --daemon kicbase/echo-server:functional-787602 --alsologtostderr                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image ls                                                                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image save kicbase/echo-server:functional-787602 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image rm kicbase/echo-server:functional-787602 --alsologtostderr                                                                        │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image ls                                                                                                                                │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	│ image     │ functional-787602 image save --daemon kicbase/echo-server:functional-787602 --alsologtostderr                                                             │ functional-787602 │ jenkins │ v1.37.0 │ 05 Dec 25 07:00 UTC │ 05 Dec 25 07:00 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:00:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:00:49.699416  502985 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:49.699614  502985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.699641  502985 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:49.699659  502985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.699940  502985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:00:49.700305  502985 out.go:368] Setting JSON to false
	I1205 07:00:49.701230  502985 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13377,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:00:49.701328  502985 start.go:143] virtualization:  
	I1205 07:00:49.704521  502985 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:00:49.707554  502985 notify.go:221] Checking for updates...
	I1205 07:00:49.708410  502985 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:00:49.711339  502985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:00:49.714116  502985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:00:49.717016  502985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:00:49.719832  502985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:00:49.722711  502985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:00:49.726030  502985 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:49.726712  502985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:00:49.759422  502985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:00:49.759531  502985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.816255  502985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.807258963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.816364  502985 docker.go:319] overlay module found
	I1205 07:00:49.819519  502985 out.go:179] * Using the docker driver based on existing profile
	I1205 07:00:49.822352  502985 start.go:309] selected driver: docker
	I1205 07:00:49.822400  502985 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.822507  502985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:00:49.822624  502985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.878784  502985 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.869881278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.879217  502985 cni.go:84] Creating CNI manager for ""
	I1205 07:00:49.879293  502985 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:00:49.879338  502985 start.go:353] cluster config:
	{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.882317  502985 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.345531529Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=e0e260b3-c6d3-422c-a8d8-fea710337aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.369982492Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=fdd9072c-c608-452e-9dcf-39515a101a39 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.370141362Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=fdd9072c-c608-452e-9dcf-39515a101a39 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:53 functional-787602 crio[10584]: time="2025-12-05T07:00:53.370183742Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=fdd9072c-c608-452e-9dcf-39515a101a39 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.212649583Z" level=info msg="Checking image status: kicbase/echo-server:functional-787602" id=87e80dbb-9863-4c85-85ab-a0d461850e9c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.23617424Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-787602" id=e741be7a-f775-41a4-8f5e-22cf12b8ab5a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.236309077Z" level=info msg="Image docker.io/kicbase/echo-server:functional-787602 not found" id=e741be7a-f775-41a4-8f5e-22cf12b8ab5a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.236347888Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=e741be7a-f775-41a4-8f5e-22cf12b8ab5a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.260395473Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=20e658bf-7482-4078-aedf-33fb9c1e43cc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.260528513Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=20e658bf-7482-4078-aedf-33fb9c1e43cc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:54 functional-787602 crio[10584]: time="2025-12-05T07:00:54.260565001Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=20e658bf-7482-4078-aedf-33fb9c1e43cc name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.355243502Z" level=info msg="Checking image status: kicbase/echo-server:functional-787602" id=98c14e92-c089-4988-83f7-1c2a3b6b1272 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.379738863Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-787602" id=d891a901-c4e6-4d5e-b2c7-770f428f9f8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.379911649Z" level=info msg="Image docker.io/kicbase/echo-server:functional-787602 not found" id=d891a901-c4e6-4d5e-b2c7-770f428f9f8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.379973943Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=d891a901-c4e6-4d5e-b2c7-770f428f9f8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.403869862Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=64decca1-5eea-4c13-8b80-8430ca0dcc62 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.404001474Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=64decca1-5eea-4c13-8b80-8430ca0dcc62 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:55 functional-787602 crio[10584]: time="2025-12-05T07:00:55.404036511Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=64decca1-5eea-4c13-8b80-8430ca0dcc62 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.186122404Z" level=info msg="Checking image status: kicbase/echo-server:functional-787602" id=67ecbe1b-7abd-4c2f-9f71-d75902b4ca08 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.210947386Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-787602" id=3a9dcb2b-2e62-42a4-be48-2e2a6dbc9704 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.211141875Z" level=info msg="Image docker.io/kicbase/echo-server:functional-787602 not found" id=3a9dcb2b-2e62-42a4-be48-2e2a6dbc9704 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.211189145Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-787602 found" id=3a9dcb2b-2e62-42a4-be48-2e2a6dbc9704 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.236160139Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-787602" id=811c5a2f-d9cf-4032-a88e-353b260fd330 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.236315407Z" level=info msg="Image localhost/kicbase/echo-server:functional-787602 not found" id=811c5a2f-d9cf-4032-a88e-353b260fd330 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:00:56 functional-787602 crio[10584]: time="2025-12-05T07:00:56.236362923Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-787602 found" id=811c5a2f-d9cf-4032-a88e-353b260fd330 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1205 07:00:58.078469   24552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:58.079029   24552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:58.080702   24552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:58.081152   24552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1205 07:00:58.082896   24552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 03:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034812] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.761688] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[Dec 5 03:18] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 5 04:36] hrtimer: interrupt took 35373468 ns
	[Dec 5 05:01] systemd-journald[219]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 5 06:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 06:11] overlayfs: idmapped layers are currently not supported
	[  +0.103226] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 06:17] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:18] overlayfs: idmapped layers are currently not supported
	[Dec 5 06:31] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:00:58 up  3:43,  0 user,  load average: 1.01, 0.41, 0.40
	Linux functional-787602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:00:55 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:56 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 05 07:00:56 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:56 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:56 functional-787602 kubelet[24346]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:56 functional-787602 kubelet[24346]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:56 functional-787602 kubelet[24346]: E1205 07:00:56.089283   24346 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:56 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:56 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:56 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 05 07:00:56 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:56 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:56 functional-787602 kubelet[24410]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:56 functional-787602 kubelet[24410]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:56 functional-787602 kubelet[24410]: E1205 07:00:56.835274   24410 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:56 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:56 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:00:57 functional-787602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 05 07:00:57 functional-787602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:57 functional-787602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:00:57 functional-787602 kubelet[24467]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:57 functional-787602 kubelet[24467]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:00:57 functional-787602 kubelet[24467]: E1205 07:00:57.592451   24467 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:00:57 functional-787602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:00:57 functional-787602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
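The kubelet section above pinpoints the root cause of every refused connection in this test: kubelet for v1.35.0-beta.0 fails its configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd has restarted it 827 times, and with no kubelet the apiserver static pod is never created, which is why port 8441 stays closed. As a sketch, the cgroup version of a host can be checked with a standard coreutils command:

    # cgroup2fs indicates cgroup v2; tmpfs indicates the legacy v1 hierarchy
    stat -fc %T /sys/fs/cgroup/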
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787602 -n functional-787602: exit status 2 (325.173732ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-787602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1205 06:58:44.071912  498847 out.go:360] Setting OutFile to fd 1 ...
I1205 06:58:44.072097  498847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:58:44.072112  498847 out.go:374] Setting ErrFile to fd 2...
I1205 06:58:44.072117  498847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:58:44.072422  498847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 06:58:44.072718  498847 mustload.go:66] Loading cluster: functional-787602
I1205 06:58:44.073185  498847 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 06:58:44.073695  498847 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 06:58:44.104794  498847 host.go:66] Checking if "functional-787602" exists ...
I1205 06:58:44.105083  498847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 06:58:44.240754  498847 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:58:44.230938214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 06:58:44.240876  498847 api_server.go:166] Checking apiserver status ...
I1205 06:58:44.240933  498847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 06:58:44.240980  498847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 06:58:44.271130  498847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
W1205 06:58:44.380431  498847 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1205 06:58:44.383764  498847 out.go:179] * The control-plane node functional-787602 apiserver is not running: (state=Stopped)
I1205 06:58:44.386893  498847 out.go:179]   To start a cluster, run: "minikube start -p functional-787602"

stdout: * The control-plane node functional-787602 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-787602"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 498848: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
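
Triage sketch (assumes the functional-787602 profile still exists on this host): exit code 103 here is minikube reporting a node whose container is running but whose kube-apiserver is stopped, matching the mustload output above. The same probes the harness runs can be replayed by hand:

    out/minikube-linux-arm64 -p functional-787602 status
    out/minikube-linux-arm64 -p functional-787602 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    kubectl --context functional-787602 get --raw /readyz

If the pgrep returns nothing, every tunnel and service test below fails for the same root cause.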

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-787602 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-787602 apply -f testdata/testsvc.yaml: exit status 1 (98.637359ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-787602 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (108.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.104.254.53": Temporary Error: Get "http://10.104.254.53": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-787602 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-787602 get svc nginx-svc: exit status 1 (64.53859ms)

** stderr ** 
	E1205 07:00:33.014907  499872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.016548  499872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.018108  499872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.019766  499872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1205 07:00:33.021230  499872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-787602 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (108.42s)
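
Triage sketch: the HTTP timeout against 10.104.254.53 is a downstream symptom; nginx-svc was never created because the apply in WaitService/Setup was refused by 192.168.49.2:8441. On a healthy cluster with an active tunnel, the equivalent manual probe would be:

    kubectl --context functional-787602 get svc nginx-svc -o wide
    curl --max-time 5 http://10.104.254.53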

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-787602 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-787602 create deployment hello-node --image kicbase/echo-server: exit status 1 (58.095159ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-787602 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 service list: exit status 103 (297.570902ms)

-- stdout --
	* The control-plane node functional-787602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-787602"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-787602 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-787602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-787602\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 service list -o json: exit status 103 (258.24317ms)

-- stdout --
	* The control-plane node functional-787602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-787602"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-787602 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 service --namespace=default --https --url hello-node: exit status 103 (257.860552ms)

-- stdout --
	* The control-plane node functional-787602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-787602"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-787602 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 service hello-node --url --format={{.IP}}: exit status 103 (261.549954ms)

-- stdout --
	* The control-plane node functional-787602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-787602"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-787602 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-787602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-787602\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 service hello-node --url: exit status 103 (255.286636ms)

-- stdout --
	* The control-plane node functional-787602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-787602"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-787602 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-787602 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-787602"
functional_test.go:1579: failed to parse "* The control-plane node functional-787602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-787602\"": parse "* The control-plane node functional-787602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-787602\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764918040672352845" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764918040672352845" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764918040672352845" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001/test-1764918040672352845
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.091108ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 07:00:41.001728  444147 retry.go:31] will retry after 444.081748ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 07:00 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 07:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 07:00 test-1764918040672352845
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh cat /mount-9p/test-1764918040672352845
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-787602 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-787602 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (57.357689ms)

** stderr ** 
	E1205 07:00:42.399781  501437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-787602 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (286.894597ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=34465)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  5 07:00 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  5 07:00 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  5 07:00 test-1764918040672352845
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-787602 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:34465
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001:/mount-9p --alsologtostderr -v=1] stderr:
I1205 07:00:40.730167  501095 out.go:360] Setting OutFile to fd 1 ...
I1205 07:00:40.730546  501095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:00:40.730556  501095 out.go:374] Setting ErrFile to fd 2...
I1205 07:00:40.730560  501095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:00:40.730830  501095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:00:40.731107  501095 mustload.go:66] Loading cluster: functional-787602
I1205 07:00:40.731481  501095 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:00:40.732050  501095 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:00:40.753482  501095 host.go:66] Checking if "functional-787602" exists ...
I1205 07:00:40.753775  501095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 07:00:40.851159  501095 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-05 07:00:40.841719761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 07:00:40.851321  501095 cli_runner.go:164] Run: docker network inspect functional-787602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 07:00:40.874182  501095 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001 into VM as /mount-9p ...
I1205 07:00:40.877176  501095 out.go:179]   - Mount type:   9p
I1205 07:00:40.880008  501095 out.go:179]   - User ID:      docker
I1205 07:00:40.882829  501095 out.go:179]   - Group ID:     docker
I1205 07:00:40.885805  501095 out.go:179]   - Version:      9p2000.L
I1205 07:00:40.888510  501095 out.go:179]   - Message Size: 262144
I1205 07:00:40.891593  501095 out.go:179]   - Options:      map[]
I1205 07:00:40.894448  501095 out.go:179]   - Bind Address: 192.168.49.1:34465
I1205 07:00:40.897214  501095 out.go:179] * Userspace file server: 
I1205 07:00:40.897548  501095 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1205 07:00:40.897673  501095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:00:40.918858  501095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
I1205 07:00:41.028164  501095 mount.go:180] unmount for /mount-9p ran successfully
I1205 07:00:41.028218  501095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1205 07:00:41.037112  501095 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=34465,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1205 07:00:41.048135  501095 main.go:127] stdlog: ufs.go:141 connected
I1205 07:00:41.048459  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tversion tag 65535 msize 262144 version '9P2000.L'
I1205 07:00:41.048503  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rversion tag 65535 msize 262144 version '9P2000'
I1205 07:00:41.048709  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1205 07:00:41.048768  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rattach tag 0 aqid (15c3b05 ed50185b 'd')
I1205 07:00:41.049539  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 0
I1205 07:00:41.049593  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3b05 ed50185b 'd') m d775 at 0 mt 1764918040 l 4096 t 0 d 0 ext )
I1205 07:00:41.051375  501095 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/.mount-process: {Name:mke65f40ac81dfca5d842592b0b3432e15af51f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 07:00:41.051566  501095 mount.go:105] mount successful: ""
I1205 07:00:41.054826  501095 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2135674648/001 to /mount-9p
I1205 07:00:41.057693  501095 out.go:203] 
I1205 07:00:41.060529  501095 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1205 07:00:42.027253  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 0
I1205 07:00:42.027333  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3b05 ed50185b 'd') m d775 at 0 mt 1764918040 l 4096 t 0 d 0 ext )
I1205 07:00:42.027729  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 1 
I1205 07:00:42.027774  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 
I1205 07:00:42.027917  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Topen tag 0 fid 1 mode 0
I1205 07:00:42.028007  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Ropen tag 0 qid (15c3b05 ed50185b 'd') iounit 0
I1205 07:00:42.028147  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 0
I1205 07:00:42.028185  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3b05 ed50185b 'd') m d775 at 0 mt 1764918040 l 4096 t 0 d 0 ext )
I1205 07:00:42.028330  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 0 count 262120
I1205 07:00:42.028439  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 258
I1205 07:00:42.028594  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 258 count 261862
I1205 07:00:42.028626  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.028750  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 258 count 262120
I1205 07:00:42.028775  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.028911  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1205 07:00:42.028948  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b06 ed50185b '') 
I1205 07:00:42.029056  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.029100  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3b06 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.029232  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.029277  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3b06 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.029400  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.029432  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.029567  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 2 0:'test-1764918040672352845' 
I1205 07:00:42.029601  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b08 ed50185b '') 
I1205 07:00:42.029711  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.029745  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('test-1764918040672352845' 'jenkins' 'jenkins' '' q (15c3b08 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.029894  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.029930  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('test-1764918040672352845' 'jenkins' 'jenkins' '' q (15c3b08 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.030046  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.030089  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.031195  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1205 07:00:42.031281  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b07 ed50185b '') 
I1205 07:00:42.031431  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.031474  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3b07 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.031605  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.031641  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3b07 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.031749  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.031769  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.031882  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 258 count 262120
I1205 07:00:42.031908  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.032031  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 1
I1205 07:00:42.032060  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.331067  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 1 0:'test-1764918040672352845' 
I1205 07:00:42.331142  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b08 ed50185b '') 
I1205 07:00:42.331344  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 1
I1205 07:00:42.331389  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('test-1764918040672352845' 'jenkins' 'jenkins' '' q (15c3b08 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.331532  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 1 newfid 2 
I1205 07:00:42.331560  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 
I1205 07:00:42.331691  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Topen tag 0 fid 2 mode 0
I1205 07:00:42.331757  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Ropen tag 0 qid (15c3b08 ed50185b '') iounit 0
I1205 07:00:42.331890  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 1
I1205 07:00:42.331944  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('test-1764918040672352845' 'jenkins' 'jenkins' '' q (15c3b08 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.332119  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 2 offset 0 count 262120
I1205 07:00:42.332163  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 24
I1205 07:00:42.332281  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 2 offset 24 count 262120
I1205 07:00:42.332309  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.332468  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 2 offset 24 count 262120
I1205 07:00:42.332519  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.332823  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.332867  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.333045  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 1
I1205 07:00:42.333080  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.680941  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 0
I1205 07:00:42.681014  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3b05 ed50185b 'd') m d775 at 0 mt 1764918040 l 4096 t 0 d 0 ext )
I1205 07:00:42.681363  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 1 
I1205 07:00:42.681398  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 
I1205 07:00:42.681535  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Topen tag 0 fid 1 mode 0
I1205 07:00:42.681589  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Ropen tag 0 qid (15c3b05 ed50185b 'd') iounit 0
I1205 07:00:42.681712  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 0
I1205 07:00:42.681747  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3b05 ed50185b 'd') m d775 at 0 mt 1764918040 l 4096 t 0 d 0 ext )
I1205 07:00:42.681912  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 0 count 262120
I1205 07:00:42.682012  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 258
I1205 07:00:42.682138  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 258 count 261862
I1205 07:00:42.682173  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.682307  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 258 count 262120
I1205 07:00:42.682332  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.682475  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1205 07:00:42.682507  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b06 ed50185b '') 
I1205 07:00:42.682641  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.682682  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3b06 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.682800  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.682829  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3b06 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.682965  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.682989  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.683113  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 2 0:'test-1764918040672352845' 
I1205 07:00:42.683147  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b08 ed50185b '') 
I1205 07:00:42.683268  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.683296  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('test-1764918040672352845' 'jenkins' 'jenkins' '' q (15c3b08 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.683417  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.683450  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('test-1764918040672352845' 'jenkins' 'jenkins' '' q (15c3b08 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.683577  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.683601  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.683722  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1205 07:00:42.683755  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rwalk tag 0 (15c3b07 ed50185b '') 
I1205 07:00:42.683874  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.683909  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3b07 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.684038  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tstat tag 0 fid 2
I1205 07:00:42.684068  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3b07 ed50185b '') m 644 at 0 mt 1764918040 l 24 t 0 d 0 ext )
I1205 07:00:42.684189  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 2
I1205 07:00:42.684212  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.684340  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tread tag 0 fid 1 offset 258 count 262120
I1205 07:00:42.684379  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rread tag 0 count 0
I1205 07:00:42.684514  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 1
I1205 07:00:42.684547  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.685709  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1205 07:00:42.685777  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rerror tag 0 ename 'file not found' ecode 0
I1205 07:00:42.947649  501095 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40908 Tclunk tag 0 fid 0
I1205 07:00:42.947705  501095 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40908 Rclunk tag 0
I1205 07:00:42.948726  501095 main.go:127] stdlog: ufs.go:147 disconnected
I1205 07:00:42.970216  501095 out.go:179] * Unmounting /mount-9p ...
I1205 07:00:42.973174  501095 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1205 07:00:42.980218  501095 mount.go:180] unmount for /mount-9p ran successfully
I1205 07:00:42.980324  501095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/.mount-process: {Name:mke65f40ac81dfca5d842592b0b3432e15af51f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 07:00:42.983531  501095 out.go:203] 
W1205 07:00:42.986363  501095 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1205 07:00:42.989333  501095 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.40s)
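
Triage sketch: the 9p mount itself worked (the findmnt and ls output above show all three test files), so only the kubectl replace step failed, again on the stopped apiserver; the MK_INTERRUPTED exit is the harness tearing the mount down afterwards. While a mount process is alive, the mount can be checked independently of Kubernetes:

    out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-787602 ssh -- ls -la /mount-9p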

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image load --daemon kicbase/echo-server:functional-787602 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-787602" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image load --daemon kicbase/echo-server:functional-787602 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-787602" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-787602
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image load --daemon kicbase/echo-server:functional-787602 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-787602" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image save kicbase/echo-server:functional-787602 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1205 07:00:56.531069  504061 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:56.531294  504061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:56.531325  504061 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:56.531345  504061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:56.532050  504061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:00:56.532755  504061 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:56.532923  504061 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:56.533509  504061 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
	I1205 07:00:56.550536  504061 ssh_runner.go:195] Run: systemctl --version
	I1205 07:00:56.550593  504061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
	I1205 07:00:56.567535  504061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
	I1205 07:00:56.669154  504061 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1205 07:00:56.669223  504061 cache_images.go:255] Failed to load cached images for "functional-787602": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1205 07:00:56.669245  504061 cache_images.go:267] failed pushing to: functional-787602

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.20s)
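
Triage sketch: this failure is downstream of ImageSaveToFile above; the load only stats a tar that was never written. A manual round trip, assuming the profile is up (/tmp/echo-server.tar is an arbitrary scratch path):

    out/minikube-linux-arm64 -p functional-787602 image save kicbase/echo-server:functional-787602 /tmp/echo-server.tar
    ls -l /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-787602 image load /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-787602 image ls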

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-787602
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image save --daemon kicbase/echo-server:functional-787602 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-787602
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-787602: exit status 1 (23.177108ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-787602

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-787602

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestJSONOutput/pause/Command (1.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-889067 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-889067 --output=json --user=testUser: exit status 80 (1.70302427s)

-- stdout --
	{"specversion":"1.0","id":"f6ea5a8a-168f-4252-ad80-76056ed592bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-889067 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"20ab6133-83a0-492d-a5f7-6e3ab18afbf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-05T07:15:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"28d18c03-a9e2-4562-a38d-b94c70bfccb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-889067 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.70s)

TestJSONOutput/unpause/Command (1.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-889067 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-889067 --output=json --user=testUser: exit status 80 (1.601132894s)

-- stdout --
	{"specversion":"1.0","id":"50abe711-f7da-4cc1-8542-d8c78f0ae191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-889067 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"70274be9-d29a-4c4b-ab23-e317bcaf374d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-05T07:15:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"ef5a66c7-eff7-4c33-bff1-831a9664a82c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-889067 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.60s)

TestKubernetesUpgrade (804.43s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 07:33:44.276739  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:33:45.394200  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.093877763s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-421996
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-421996: (1.777801948s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-421996 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-421996 status --format={{.Host}}: exit status 7 (96.709759ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
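For context on the "(may be ok)" note: the stdout above shows the host is Stopped, and exit status 7 from minikube status coincides with that state here, which is why the test continues. A hedged sketch of that check in Go (the 7-means-stopped mapping is inferred from this log rather than taken from minikube documentation; binary path and profile name are copied from the step above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the test step above.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "kubernetes-upgrade-421996",
			"status", "--format={{.Host}}")
		out, err := cmd.Output() // on a non-zero exit, out still holds captured stdout
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			fmt.Printf("host stopped (exit 7, may be ok): %s", out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("host state: %s", out)
	}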
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m34.633868456s)

-- stdout --
	* [kubernetes-upgrade-421996] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-421996" primary control-plane node in "kubernetes-upgrade-421996" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

-- /stdout --
** stderr ** 
	I1205 07:33:48.470784  623133 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:33:48.471010  623133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:33:48.471037  623133 out.go:374] Setting ErrFile to fd 2...
	I1205 07:33:48.471060  623133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:33:48.471335  623133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:33:48.471738  623133 out.go:368] Setting JSON to false
	I1205 07:33:48.472609  623133 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15356,"bootTime":1764904673,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:33:48.472702  623133 start.go:143] virtualization:  
	I1205 07:33:48.476962  623133 out.go:179] * [kubernetes-upgrade-421996] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:33:48.479844  623133 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:33:48.479926  623133 notify.go:221] Checking for updates...
	I1205 07:33:48.485643  623133 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:33:48.488587  623133 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:33:48.491506  623133 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:33:48.497004  623133 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:33:48.502533  623133 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:33:48.506032  623133 config.go:182] Loaded profile config "kubernetes-upgrade-421996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1205 07:33:48.506666  623133 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:33:48.533969  623133 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:33:48.534090  623133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:33:48.593153  623133 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-05 07:33:48.583142595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:33:48.593267  623133 docker.go:319] overlay module found
	I1205 07:33:48.596700  623133 out.go:179] * Using the docker driver based on existing profile
	I1205 07:33:48.599582  623133 start.go:309] selected driver: docker
	I1205 07:33:48.599610  623133 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-421996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-421996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:33:48.599717  623133 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:33:48.600424  623133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:33:48.654010  623133 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-05 07:33:48.644083693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:33:48.654352  623133 cni.go:84] Creating CNI manager for ""
	I1205 07:33:48.654481  623133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:33:48.654526  623133 start.go:353] cluster config:
	{Name:kubernetes-upgrade-421996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-421996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:33:48.657703  623133 out.go:179] * Starting "kubernetes-upgrade-421996" primary control-plane node in "kubernetes-upgrade-421996" cluster
	I1205 07:33:48.660521  623133 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:33:48.663516  623133 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:33:48.666569  623133 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:33:48.666656  623133 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:33:48.688276  623133 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:33:48.688296  623133 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1205 07:33:48.728268  623133 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1205 07:33:48.957415  623133 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 07:33:48.957566  623133 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/config.json ...
	I1205 07:33:48.957800  623133 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:33:48.957841  623133 start.go:360] acquireMachinesLock for kubernetes-upgrade-421996: {Name:mk5de817ea08bd1de37f798a3d4cdc7d91838675 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.957904  623133 start.go:364] duration metric: took 42.659µs to acquireMachinesLock for "kubernetes-upgrade-421996"
	I1205 07:33:48.957920  623133 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:33:48.957925  623133 fix.go:54] fixHost starting: 
	I1205 07:33:48.958216  623133 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-421996 --format={{.State.Status}}
	I1205 07:33:48.958477  623133 cache.go:107] acquiring lock: {Name:mk88b952660f9f9a3cd2b139fee120b0278d1e20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958550  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 07:33:48.958564  623133 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.829µs
	I1205 07:33:48.958585  623133 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 07:33:48.958598  623133 cache.go:107] acquiring lock: {Name:mkedaab1cf77620d08ef2f51ca7e1d9f57f72363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958634  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1205 07:33:48.958683  623133 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 86.869µs
	I1205 07:33:48.958695  623133 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1205 07:33:48.958707  623133 cache.go:107] acquiring lock: {Name:mka5c049e32c8e3169e4c167a0d0b15213dce995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958742  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1205 07:33:48.958752  623133 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 47.008µs
	I1205 07:33:48.958765  623133 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1205 07:33:48.958776  623133 cache.go:107] acquiring lock: {Name:mkaf5cb322e900aa41709cc418ac159b392f9f8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958811  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1205 07:33:48.958821  623133 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 45.539µs
	I1205 07:33:48.958831  623133 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1205 07:33:48.958841  623133 cache.go:107] acquiring lock: {Name:mk5baca4bb3050b9bd529b5a05ebd4eb73b711b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958876  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1205 07:33:48.958886  623133 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 46.926µs
	I1205 07:33:48.958892  623133 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1205 07:33:48.958905  623133 cache.go:107] acquiring lock: {Name:mk6bd4a5d645dc97aea22009b52080340baf091d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958937  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1205 07:33:48.958946  623133 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.727µs
	I1205 07:33:48.958951  623133 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1205 07:33:48.958960  623133 cache.go:107] acquiring lock: {Name:mk06e2bb02831ba97123bb14b873925e8358c670 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.958989  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1205 07:33:48.958998  623133 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 38.778µs
	I1205 07:33:48.959004  623133 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1205 07:33:48.959020  623133 cache.go:107] acquiring lock: {Name:mk9d47c39513d2ffe8d26acb8d5af358d2c89b10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:33:48.959047  623133 cache.go:115] /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1205 07:33:48.959057  623133 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.581µs
	I1205 07:33:48.959063  623133 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1205 07:33:48.959077  623133 cache.go:87] Successfully saved all images to host disk.
	I1205 07:33:48.978278  623133 fix.go:112] recreateIfNeeded on kubernetes-upgrade-421996: state=Stopped err=<nil>
	W1205 07:33:48.978317  623133 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:33:48.985449  623133 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-421996" ...
	I1205 07:33:48.985539  623133 cli_runner.go:164] Run: docker start kubernetes-upgrade-421996
	I1205 07:33:49.504934  623133 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-421996 --format={{.State.Status}}
	I1205 07:33:49.525225  623133 kic.go:430] container "kubernetes-upgrade-421996" state is running.
	I1205 07:33:49.525614  623133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-421996
	I1205 07:33:49.545971  623133 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/config.json ...
	I1205 07:33:49.546194  623133 machine.go:94] provisionDockerMachine start ...
	I1205 07:33:49.546264  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:49.572505  623133 main.go:143] libmachine: Using SSH client type: native
	I1205 07:33:49.573184  623133 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1205 07:33:49.573200  623133 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:33:49.573831  623133 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56450->127.0.0.1:33373: read: connection reset by peer
	I1205 07:33:52.729963  623133 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-421996
	
	I1205 07:33:52.729988  623133 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-421996"
	I1205 07:33:52.730067  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:52.751939  623133 main.go:143] libmachine: Using SSH client type: native
	I1205 07:33:52.752305  623133 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1205 07:33:52.752345  623133 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-421996 && echo "kubernetes-upgrade-421996" | sudo tee /etc/hostname
	I1205 07:33:52.929260  623133 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-421996
	
	I1205 07:33:52.929359  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:52.951657  623133 main.go:143] libmachine: Using SSH client type: native
	I1205 07:33:52.952009  623133 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1205 07:33:52.952032  623133 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-421996' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-421996/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-421996' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:33:53.114848  623133 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:33:53.114943  623133 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 07:33:53.114994  623133 ubuntu.go:190] setting up certificates
	I1205 07:33:53.115028  623133 provision.go:84] configureAuth start
	I1205 07:33:53.115120  623133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-421996
	I1205 07:33:53.136960  623133 provision.go:143] copyHostCerts
	I1205 07:33:53.137035  623133 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 07:33:53.137051  623133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 07:33:53.137122  623133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 07:33:53.137248  623133 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 07:33:53.137253  623133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 07:33:53.137278  623133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 07:33:53.137327  623133 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 07:33:53.137331  623133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 07:33:53.137354  623133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 07:33:53.137396  623133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-421996 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-421996 localhost minikube]
	I1205 07:33:53.181988  623133 provision.go:177] copyRemoteCerts
	I1205 07:33:53.182116  623133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:33:53.182176  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:53.199735  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:53.318438  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:33:53.340221  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 07:33:53.360673  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:33:53.382597  623133 provision.go:87] duration metric: took 267.524783ms to configureAuth
	I1205 07:33:53.382707  623133 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:33:53.383006  623133 config.go:182] Loaded profile config "kubernetes-upgrade-421996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:33:53.383191  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:53.409483  623133 main.go:143] libmachine: Using SSH client type: native
	I1205 07:33:53.409802  623133 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1205 07:33:53.409821  623133 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:33:53.963370  623133 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:33:53.963443  623133 machine.go:97] duration metric: took 4.417238965s to provisionDockerMachine
	I1205 07:33:53.963470  623133 start.go:293] postStartSetup for "kubernetes-upgrade-421996" (driver="docker")
	I1205 07:33:53.963495  623133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:33:53.963604  623133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:33:53.963677  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:53.987100  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:54.107284  623133 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:33:54.111385  623133 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:33:54.111411  623133 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:33:54.111421  623133 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 07:33:54.111472  623133 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 07:33:54.111549  623133 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 07:33:54.111657  623133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:33:54.119814  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:33:54.142360  623133 start.go:296] duration metric: took 178.863107ms for postStartSetup
	I1205 07:33:54.142524  623133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:33:54.142596  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:54.166336  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:54.292811  623133 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:33:54.298180  623133 fix.go:56] duration metric: took 5.340247347s for fixHost
	I1205 07:33:54.298208  623133 start.go:83] releasing machines lock for "kubernetes-upgrade-421996", held for 5.340294478s
	I1205 07:33:54.298287  623133 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-421996
	I1205 07:33:54.319930  623133 ssh_runner.go:195] Run: cat /version.json
	I1205 07:33:54.319981  623133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:33:54.320047  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:54.319986  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:54.357845  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:54.365918  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:54.591207  623133 ssh_runner.go:195] Run: systemctl --version
	I1205 07:33:54.600090  623133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:33:54.667962  623133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:33:54.673801  623133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:33:54.673876  623133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:33:54.713414  623133 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:33:54.713486  623133 start.go:496] detecting cgroup driver to use...
	I1205 07:33:54.713543  623133 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:33:54.713622  623133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:33:54.735903  623133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:33:54.754225  623133 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:33:54.754333  623133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:33:54.772301  623133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:33:54.787724  623133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:33:54.909251  623133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:33:55.036032  623133 docker.go:234] disabling docker service ...
	I1205 07:33:55.036158  623133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:33:55.054839  623133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:33:55.068677  623133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:33:55.199333  623133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:33:55.332584  623133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:33:55.346200  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:33:55.361842  623133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:33:55.361948  623133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.371413  623133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 07:33:55.371511  623133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.383948  623133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.394885  623133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.405230  623133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:33:55.413606  623133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.425466  623133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.434026  623133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:33:55.445764  623133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:33:55.453571  623133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:33:55.461561  623133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:33:55.577743  623133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:33:56.638840  623133 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.061059008s)
	I1205 07:33:56.638869  623133 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:33:56.638920  623133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:33:56.649092  623133 start.go:564] Will wait 60s for crictl version
	I1205 07:33:56.649221  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:56.655755  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:33:56.691881  623133 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:33:56.691968  623133 ssh_runner.go:195] Run: crio --version
	I1205 07:33:56.760341  623133 ssh_runner.go:195] Run: crio --version
	I1205 07:33:56.800291  623133 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1205 07:33:56.804067  623133 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-421996 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:33:56.821010  623133 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1205 07:33:56.824953  623133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:33:56.835322  623133 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-421996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-421996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:33:56.835430  623133 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1205 07:33:56.835476  623133 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:33:56.870981  623133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1205 07:33:56.871021  623133 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 07:33:56.871082  623133 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:33:56.871297  623133 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:56.871386  623133 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:56.871462  623133 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:56.871547  623133 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:56.871643  623133 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1205 07:33:56.871723  623133 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:56.871788  623133 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:56.875558  623133 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1205 07:33:56.878415  623133 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:56.880202  623133 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:56.880287  623133 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:56.880352  623133 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:56.880420  623133 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:33:56.880488  623133 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:56.884670  623133 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:57.245169  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:57.314291  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:57.327556  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1205 07:33:57.348342  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:57.408792  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:57.411999  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:57.440919  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:57.446935  623133 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1205 07:33:57.447014  623133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:57.447075  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.449728  623133 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1205 07:33:57.449810  623133 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:57.449870  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.701847  623133 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1205 07:33:57.701942  623133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:57.702016  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.701943  623133 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1205 07:33:57.702157  623133 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1205 07:33:57.702199  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.752348  623133 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1205 07:33:57.752391  623133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:57.752437  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.765497  623133 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1205 07:33:57.765524  623133 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1205 07:33:57.765539  623133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:57.765549  623133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:57.765592  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.765595  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:57.765645  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:57.765687  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:57.774478  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:33:57.775040  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:57.802737  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:57.915602  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:57.915822  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:58.171171  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:58.171274  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:58.178060  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:33:58.178159  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:58.239875  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:58.239961  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:58.240021  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:58.365618  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1205 07:33:58.365700  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1205 07:33:58.400518  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1205 07:33:58.400623  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1205 07:33:58.424246  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1205 07:33:58.424317  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1205 07:33:58.424368  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1205 07:33:58.603175  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1205 07:33:58.603270  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:33:58.603327  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1205 07:33:58.603383  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:33:58.613448  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1205 07:33:58.613541  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1205 07:33:58.613598  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1205 07:33:58.613638  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:33:58.613718  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1205 07:33:58.613761  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:33:58.613808  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1205 07:33:58.613850  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:33:58.613889  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1205 07:33:58.613943  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:33:58.615037  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1205 07:33:58.615073  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1205 07:33:58.615122  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1205 07:33:58.615135  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1205 07:33:58.683815  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1205 07:33:58.683890  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1205 07:33:58.683960  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1205 07:33:58.684000  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1205 07:33:58.684062  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1205 07:33:58.684096  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1205 07:33:58.684169  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1205 07:33:58.684208  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1205 07:33:58.684271  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1205 07:33:58.684303  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
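Each stat/scp pair above is the same transfer check: stat -c "%s %y" probes the remote file, and only a failed probe (status 1, "No such file or directory") triggers a copy out of the local image cache. A minimal shell sketch of that pattern — the host alias node and both paths are placeholders, not minikube names:

    # Copy the cached archive only if the remote file is missing
    # (a failing remote stat is the signal to transfer).
    CACHE=$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1  # illustrative
    DEST=/var/lib/minikube/images/pause_3.10.1                             # illustrative
    if ! ssh node "stat -c '%s %y' $DEST" >/dev/null 2>&1; then
      scp "$CACHE" "node:$DEST"
    fi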
	W1205 07:33:58.721310  623133 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:33:58.721395  623133 retry.go:31] will retry after 177.167397ms: ssh: rejected: connect failed (open failed)
	W1205 07:33:58.721974  623133 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1205 07:33:58.722044  623133 retry.go:31] will retry after 249.821022ms: ssh: rejected: connect failed (open failed)
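The two session errors above are retried after short delays (177ms and 250ms here). A stripped-down version of that retry shape, with illustrative delays and an illustrative probe command:

    # Retry a flaky SSH session a few times with increasing pauses.
    for delay in 0.18 0.25 0.5; do
      ssh node true && break   # probe succeeded; stop retrying
      sleep "$delay"
    done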
	I1205 07:33:58.882709  623133 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1205 07:33:58.882844  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1205 07:33:58.882963  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:58.899661  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:58.959143  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:58.972605  623133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-421996
	I1205 07:33:58.975156  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	I1205 07:33:59.095320  623133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/kubernetes-upgrade-421996/id_rsa Username:docker}
	W1205 07:33:59.101805  623133 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 07:33:59.102005  623133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:33:59.607942  623133 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 07:33:59.607984  623133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:33:59.608031  623133 ssh_runner.go:195] Run: which crictl
	I1205 07:33:59.608095  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1205 07:33:59.608109  623133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:33:59.608137  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1205 07:34:02.026257  623133 ssh_runner.go:235] Completed: which crictl: (2.418202934s)
	I1205 07:34:02.026333  623133 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 07:34:02.026350  623133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.418196961s)
	I1205 07:34:02.026367  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1205 07:34:02.026465  623133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:02.026508  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1205 07:34:02.107496  623133 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 07:34:02.107602  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:34:04.676237  623133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.649705714s)
	I1205 07:34:04.676262  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1205 07:34:04.676280  623133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:04.676330  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1205 07:34:04.676394  623133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.568781978s)
	I1205 07:34:04.676408  623133 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 07:34:04.676422  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 07:34:06.328447  623133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.652097004s)
	I1205 07:34:06.328513  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1205 07:34:06.328549  623133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:06.328640  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1205 07:34:08.245656  623133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.91696094s)
	I1205 07:34:08.245690  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1205 07:34:08.245710  623133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:08.245773  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1205 07:34:09.858878  623133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.613076587s)
	I1205 07:34:09.858917  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1205 07:34:09.858942  623133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:09.859017  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1205 07:34:11.571308  623133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.712268135s)
	I1205 07:34:11.571341  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1205 07:34:11.571360  623133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:34:11.571419  623133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 07:34:12.290238  623133 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-441321/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 07:34:12.290270  623133 cache_images.go:125] Successfully loaded all cached images
	I1205 07:34:12.290276  623133 cache_images.go:94] duration metric: took 15.419241727s to LoadCachedImages
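The preceding loads streamed each transferred archive into the CRI-O image store with podman load, one image at a time (each "Loading image" line waits for the previous load to complete). Reduced to its core, with an illustrative glob:

    # Load every staged image archive into the node's image store,
    # serially, mirroring the one-at-a-time loads in the log.
    for img in /var/lib/minikube/images/*; do
      sudo podman load -i "$img"
    done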
	I1205 07:34:12.290291  623133 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1205 07:34:12.290407  623133 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-421996 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-421996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:34:12.290491  623133 ssh_runner.go:195] Run: crio config
	I1205 07:34:12.365023  623133 cni.go:84] Creating CNI manager for ""
	I1205 07:34:12.365084  623133 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:34:12.365132  623133 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:34:12.365174  623133 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-421996 NodeName:kubernetes-upgrade-421996 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:34:12.365369  623133 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-421996"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 07:34:12.365477  623133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:34:12.374866  623133 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1205 07:34:12.374995  623133 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1205 07:34:12.383515  623133 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1205 07:34:12.383624  623133 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1205 07:34:12.383806  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1205 07:34:12.383808  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1205 07:34:12.383641  623133 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1205 07:34:12.384017  623133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:34:12.404879  623133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1205 07:34:12.404947  623133 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1205 07:34:12.404963  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1205 07:34:12.405001  623133 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1205 07:34:12.405010  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1205 07:34:12.438906  623133 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1205 07:34:12.438995  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1205 07:34:13.367069  623133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:34:13.377745  623133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1205 07:34:13.392889  623133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1205 07:34:13.410353  623133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1205 07:34:13.425486  623133 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:34:13.430760  623133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
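The one-liner above rewrites /etc/hosts in two steps so the redirection doesn't need root: the filtered file plus the refreshed entry go to a temp file, which is then installed with sudo cp. Unrolled for readability:

    # Drop any stale control-plane.minikube.internal line, append the
    # current mapping, and install the result via a temp file.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts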
	I1205 07:34:13.444510  623133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:34:13.602976  623133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:34:13.622823  623133 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996 for IP: 192.168.76.2
	I1205 07:34:13.622846  623133 certs.go:195] generating shared ca certs ...
	I1205 07:34:13.622862  623133 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:13.623035  623133 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 07:34:13.623115  623133 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 07:34:13.623129  623133 certs.go:257] generating profile certs ...
	I1205 07:34:13.623236  623133 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/client.key
	I1205 07:34:13.623319  623133 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/apiserver.key.8b68665d
	I1205 07:34:13.623408  623133 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/proxy-client.key
	I1205 07:34:13.623566  623133 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 07:34:13.623619  623133 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 07:34:13.623634  623133 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:34:13.623663  623133 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:34:13.623703  623133 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:34:13.623743  623133 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 07:34:13.623820  623133 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:34:13.624434  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:34:13.655371  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:34:13.685741  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:34:13.721511  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:34:13.748551  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 07:34:13.772013  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 07:34:13.796418  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:34:13.822578  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 07:34:13.845718  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:34:13.871159  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 07:34:13.893413  623133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 07:34:13.914109  623133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:34:13.929574  623133 ssh_runner.go:195] Run: openssl version
	I1205 07:34:13.937389  623133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:34:13.947909  623133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:34:13.958810  623133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:34:13.965126  623133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:34:13.965214  623133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:34:14.019241  623133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:34:14.032226  623133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 07:34:14.048286  623133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 07:34:14.059504  623133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 07:34:14.066438  623133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 07:34:14.066530  623133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 07:34:14.112618  623133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:34:14.121822  623133 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 07:34:14.131676  623133 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 07:34:14.141428  623133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 07:34:14.148600  623133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 07:34:14.148695  623133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 07:34:14.195403  623133 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
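Each of the three certificate cycles above has the same shape: link the PEM under /etc/ssl/certs, compute its OpenSSL subject hash, then confirm the <hash>.0 symlink that TLS libraries use for CA lookup. A sketch of that shape — it creates the hash link explicitly, which the log only verifies with test -L:

    # Install a CA PEM and its subject-hash lookup symlink.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")     # e.g. b5213941
    sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"      # the <hash>.0 link checked above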
	I1205 07:34:14.204268  623133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:34:14.210478  623133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:34:14.254870  623133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:34:14.298674  623133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:34:14.341266  623133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:34:14.389842  623133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:34:14.433574  623133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
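The six openssl invocations above all use -checkend 86400, which exits non-zero when the certificate expires within the next 86400 seconds (24 hours); a zero exit means the cert is still good for at least a day. For example:

    # Flag a certificate that expires within 24h (path is one of those
    # checked above).
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate expires within 24h; regenerate" >&2
    fi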
	I1205 07:34:14.478078  623133 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-421996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-421996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:34:14.478230  623133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:34:14.478329  623133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:34:14.533512  623133 cri.go:89] found id: ""
	I1205 07:34:14.533622  623133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:34:14.544411  623133 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:34:14.544482  623133 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:34:14.544564  623133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:34:14.567958  623133 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:34:14.568431  623133 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-421996" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:34:14.568581  623133 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-441321/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-421996" cluster setting kubeconfig missing "kubernetes-upgrade-421996" context setting]
	I1205 07:34:14.568889  623133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:34:14.570317  623133 kapi.go:59] client config for kubernetes-upgrade-421996: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/kubernetes-upgrade-421996/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:34:14.570968  623133 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 07:34:14.571006  623133 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 07:34:14.571058  623133 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 07:34:14.571086  623133 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 07:34:14.571109  623133 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 07:34:14.571397  623133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:34:14.597649  623133 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-05 07:33:21.471461320 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-05 07:34:13.420100125 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-421996"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
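The drift detection above reduces to a diff whose exit status drives the decision: diff(1) returns 0 when the deployed kubeadm config matches the freshly rendered one and 1 when they differ, and on a difference the new file is copied into place (the cp at 07:34:14.744645 below). In outline:

    # Reconfigure only when the rendered kubeadm config drifted.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi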
	I1205 07:34:14.597716  623133 kubeadm.go:1161] stopping kube-system containers ...
	I1205 07:34:14.597743  623133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 07:34:14.597832  623133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:34:14.656942  623133 cri.go:89] found id: ""
	I1205 07:34:14.657051  623133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 07:34:14.675313  623133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:34:14.685595  623133 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  5 07:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  5 07:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  5 07:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  5 07:33 /etc/kubernetes/scheduler.conf
	
	I1205 07:34:14.685707  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:34:14.695950  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:34:14.705808  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:34:14.715583  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:34:14.715716  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:34:14.724977  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:34:14.734664  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:34:14.734779  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:34:14.744645  623133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:34:14.756100  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:34:14.818974  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:34:16.593057  623133 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.774003363s)
	I1205 07:34:16.593127  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:34:17.009113  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:34:17.133970  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 07:34:17.261153  623133 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:34:17.261329  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:17.761671  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:18.261564  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:18.762210  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:19.261990  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:19.762403  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:20.262157  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:20.761346  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:21.262202  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:21.761421  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:22.261451  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:22.762026  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:23.261945  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:23.762343  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:24.261382  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:24.761548  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:25.261807  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:25.762197  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:26.261397  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:26.761903  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:27.262310  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:27.761601  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:28.262394  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:28.762174  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:29.261956  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:29.761425  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:30.261471  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:30.761470  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:31.262244  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:31.761829  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:32.262360  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:32.761446  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:33.262137  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:33.762119  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:34.262027  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:34.761731  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:35.262040  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:35.761394  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:36.261395  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:36.761584  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:37.262025  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:37.762365  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:38.261464  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:38.761476  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:39.261982  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:39.761379  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:40.262275  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:40.761562  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:41.261353  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:41.761396  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:42.262412  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:42.761683  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:43.262410  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:43.761649  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:44.261389  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:44.762403  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:45.262267  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:45.761805  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:46.261498  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:46.761962  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:47.262045  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:47.761459  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:48.262308  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:48.761895  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:49.261500  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:49.762180  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:50.262337  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:50.761452  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:51.261842  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:51.762309  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:52.261470  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:52.761448  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:53.261657  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:53.762014  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:54.262062  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:54.761363  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:55.262280  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:55.761508  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:56.262113  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:56.762221  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:57.261485  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:57.762023  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:58.261484  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:58.761473  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:59.261589  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:34:59.762113  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:00.261497  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:00.761489  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:01.261453  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:01.761700  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:02.262170  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:02.762081  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:03.261452  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:03.761858  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:04.261468  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:04.762149  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:05.261513  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:05.762284  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:06.261389  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:06.761462  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:07.262354  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:07.761513  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:08.262205  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:08.761494  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:09.262106  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:09.762147  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:10.261673  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:10.762288  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:11.262042  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:11.761757  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:12.261475  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:12.761977  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:13.262464  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:13.762149  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:14.261347  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:14.762408  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:15.261610  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:15.762280  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:16.261509  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:16.762149  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
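The long run of pgrep invocations above is a fixed-interval wait: the process list is polled roughly every 500ms until kube-apiserver appears, and after about a minute without a hit the wait gives up and the run falls back to gathering logs below. The equivalent loop, with an illustrative deadline:

    # Poll for the apiserver process every 500ms until a deadline.
    deadline=$((SECONDS + 60))   # illustrative; the real timeout differs
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never appeared" >&2; break; }
      sleep 0.5
    done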
	I1205 07:35:17.261390  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:17.261504  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:17.325132  623133 cri.go:89] found id: ""
	I1205 07:35:17.325158  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.325167  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:17.325173  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:17.325236  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:17.355285  623133 cri.go:89] found id: ""
	I1205 07:35:17.355310  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.355330  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:17.355336  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:17.355395  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:17.393007  623133 cri.go:89] found id: ""
	I1205 07:35:17.393032  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.393041  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:17.393047  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:17.393107  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:17.420322  623133 cri.go:89] found id: ""
	I1205 07:35:17.420344  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.420353  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:17.420359  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:17.420428  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:17.450928  623133 cri.go:89] found id: ""
	I1205 07:35:17.450954  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.450963  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:17.450969  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:17.451029  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:17.476497  623133 cri.go:89] found id: ""
	I1205 07:35:17.476530  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.476540  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:17.476547  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:17.476605  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:17.502298  623133 cri.go:89] found id: ""
	I1205 07:35:17.502319  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.502328  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:17.502334  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:17.502415  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:17.537513  623133 cri.go:89] found id: ""
	I1205 07:35:17.537539  623133 logs.go:282] 0 containers: []
	W1205 07:35:17.537548  623133 logs.go:284] No container was found matching "storage-provisioner"
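With the process probe still failing, the cycle falls through to enumerating every expected control-plane and addon container by name with crictl (all states, IDs only); the empty found id: "" result for all eight names is what produces the 0 containers warnings above. A self-contained sketch of one enumeration pass, using the exact component list and crictl invocation from the log (the loop structure is an assumption; minikube drives this through its cri package over SSH):

	// listCRIContainers mirrors the per-component crictl probes in the log:
	// for each name it asks crictl for matching container IDs and warns when
	// none are found. In this sketch a failing crictl is treated the same as
	// an empty ID list.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, name := range components {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}

Note that --quiet prints only container IDs and -a includes every state, so an empty result means no container exists at all for that name, not merely none running; the check rules out crashed or exited containers as well.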
	I1205 07:35:17.537557  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:17.537568  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:17.608817  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:17.608854  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:17.626990  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:17.627022  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:17.695075  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:17.695103  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:17.695117  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:17.740013  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:17.740054  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
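Because nothing was found, each cycle ends by gathering fallback diagnostics: the kubelet and CRI-O journals, filtered dmesg, container status, and a kubectl describe nodes run against the node-local kubeconfig. The describe step targets localhost:8443, so its "connection refused" failure is a direct consequence of the missing kube-apiserver container rather than an independent error. A sketch of the same gather commands, copied verbatim from the log (the surrounding loop and error formatting are assumptions):

	// gatherLogs runs the fallback diagnostics seen in the log. The
	// describe-nodes step talks to localhost:8443 and is therefore expected
	// to fail with "connection refused" while no kube-apiserver is running.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n%s\n", name, err, out)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", name, out)
		}
	}

The same cycle (probe, enumerate, gather) then repeats every ~2.5s for the remainder of this section without any change in outcome.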
	I1205 07:35:20.277199  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:20.293444  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:20.293511  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:20.330798  623133 cri.go:89] found id: ""
	I1205 07:35:20.330819  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.330827  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:20.330834  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:20.330894  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:20.361908  623133 cri.go:89] found id: ""
	I1205 07:35:20.361982  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.361993  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:20.361999  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:20.362089  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:20.389491  623133 cri.go:89] found id: ""
	I1205 07:35:20.389517  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.389525  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:20.389532  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:20.389591  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:20.415935  623133 cri.go:89] found id: ""
	I1205 07:35:20.415964  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.415974  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:20.415981  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:20.416049  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:20.444766  623133 cri.go:89] found id: ""
	I1205 07:35:20.444792  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.444801  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:20.444807  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:20.444874  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:20.471987  623133 cri.go:89] found id: ""
	I1205 07:35:20.472013  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.472022  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:20.472029  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:20.472108  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:20.505537  623133 cri.go:89] found id: ""
	I1205 07:35:20.505563  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.505572  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:20.505578  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:20.505640  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:20.533233  623133 cri.go:89] found id: ""
	I1205 07:35:20.533263  623133 logs.go:282] 0 containers: []
	W1205 07:35:20.533272  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:20.533281  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:20.533292  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:20.603153  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:20.603190  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:20.620780  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:20.620809  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:20.687478  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:20.687497  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:20.687510  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:20.730326  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:20.730358  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:23.260021  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:23.275717  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:23.275795  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:23.316847  623133 cri.go:89] found id: ""
	I1205 07:35:23.316869  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.316878  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:23.316884  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:23.316944  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:23.346738  623133 cri.go:89] found id: ""
	I1205 07:35:23.346761  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.346770  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:23.346777  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:23.346838  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:23.375120  623133 cri.go:89] found id: ""
	I1205 07:35:23.375149  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.375160  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:23.375166  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:23.375229  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:23.407114  623133 cri.go:89] found id: ""
	I1205 07:35:23.407140  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.407149  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:23.407155  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:23.407219  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:23.445993  623133 cri.go:89] found id: ""
	I1205 07:35:23.446015  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.446024  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:23.446030  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:23.446094  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:23.473865  623133 cri.go:89] found id: ""
	I1205 07:35:23.473894  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.473903  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:23.473910  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:23.473969  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:23.502010  623133 cri.go:89] found id: ""
	I1205 07:35:23.502041  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.502050  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:23.502056  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:23.502114  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:23.533778  623133 cri.go:89] found id: ""
	I1205 07:35:23.533803  623133 logs.go:282] 0 containers: []
	W1205 07:35:23.533813  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:23.533822  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:23.533840  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:23.608248  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:23.608285  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:23.625824  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:23.625863  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:23.694716  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:23.694738  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:23.694750  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:23.738078  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:23.738115  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:26.271576  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:26.297771  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:26.297839  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:26.352710  623133 cri.go:89] found id: ""
	I1205 07:35:26.352732  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.352740  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:26.352746  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:26.352805  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:26.399795  623133 cri.go:89] found id: ""
	I1205 07:35:26.399817  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.399825  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:26.399831  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:26.399892  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:26.427913  623133 cri.go:89] found id: ""
	I1205 07:35:26.427936  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.427945  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:26.427951  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:26.428013  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:26.457337  623133 cri.go:89] found id: ""
	I1205 07:35:26.457411  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.457434  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:26.457452  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:26.457541  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:26.487614  623133 cri.go:89] found id: ""
	I1205 07:35:26.487639  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.487647  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:26.487654  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:26.487712  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:26.521869  623133 cri.go:89] found id: ""
	I1205 07:35:26.521904  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.521912  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:26.521919  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:26.521979  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:26.560729  623133 cri.go:89] found id: ""
	I1205 07:35:26.560750  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.560759  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:26.560765  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:26.560824  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:26.590730  623133 cri.go:89] found id: ""
	I1205 07:35:26.590751  623133 logs.go:282] 0 containers: []
	W1205 07:35:26.590760  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:26.590768  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:26.590779  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:26.669738  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:26.669815  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:26.687952  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:26.687980  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:26.757191  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:26.757210  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:26.757225  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:26.799173  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:26.799208  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:29.330287  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:29.341741  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:29.341812  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:29.369657  623133 cri.go:89] found id: ""
	I1205 07:35:29.369680  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.369689  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:29.369695  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:29.369754  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:29.399942  623133 cri.go:89] found id: ""
	I1205 07:35:29.399977  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.399990  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:29.399996  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:29.400069  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:29.427230  623133 cri.go:89] found id: ""
	I1205 07:35:29.427255  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.427264  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:29.427270  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:29.427339  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:29.455708  623133 cri.go:89] found id: ""
	I1205 07:35:29.455734  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.455743  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:29.455749  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:29.455837  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:29.483736  623133 cri.go:89] found id: ""
	I1205 07:35:29.483764  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.483773  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:29.483780  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:29.483857  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:29.513033  623133 cri.go:89] found id: ""
	I1205 07:35:29.513058  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.513067  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:29.513074  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:29.513149  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:29.540465  623133 cri.go:89] found id: ""
	I1205 07:35:29.540497  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.540506  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:29.540512  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:29.540579  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:29.568147  623133 cri.go:89] found id: ""
	I1205 07:35:29.568172  623133 logs.go:282] 0 containers: []
	W1205 07:35:29.568180  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:29.568189  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:29.568205  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:29.637152  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:29.637189  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:29.654403  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:29.654588  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:29.722431  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:29.722450  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:29.722471  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:29.766055  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:29.766090  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:32.295838  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:32.317438  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:32.317516  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:32.350531  623133 cri.go:89] found id: ""
	I1205 07:35:32.350553  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.350562  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:32.350568  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:32.350627  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:32.376848  623133 cri.go:89] found id: ""
	I1205 07:35:32.376910  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.376932  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:32.376955  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:32.377040  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:32.402782  623133 cri.go:89] found id: ""
	I1205 07:35:32.402845  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.402859  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:32.402866  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:32.402928  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:32.435338  623133 cri.go:89] found id: ""
	I1205 07:35:32.435401  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.435417  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:32.435428  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:32.435486  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:32.467308  623133 cri.go:89] found id: ""
	I1205 07:35:32.467341  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.467350  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:32.467356  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:32.467423  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:32.493753  623133 cri.go:89] found id: ""
	I1205 07:35:32.493786  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.493795  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:32.493802  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:32.493868  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:32.519565  623133 cri.go:89] found id: ""
	I1205 07:35:32.519587  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.519596  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:32.519602  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:32.519666  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:32.550170  623133 cri.go:89] found id: ""
	I1205 07:35:32.550234  623133 logs.go:282] 0 containers: []
	W1205 07:35:32.550257  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:32.550276  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:32.550314  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:32.621176  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:32.621212  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:32.638470  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:32.638498  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:32.706481  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:32.706503  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:32.706515  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:32.748163  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:32.748193  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:35.278251  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:35.291832  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:35.291925  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:35.323605  623133 cri.go:89] found id: ""
	I1205 07:35:35.323633  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.323642  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:35.323648  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:35.323710  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:35.356333  623133 cri.go:89] found id: ""
	I1205 07:35:35.356358  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.356367  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:35.356373  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:35.356434  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:35.385726  623133 cri.go:89] found id: ""
	I1205 07:35:35.385750  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.385759  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:35.385766  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:35.385826  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:35.411217  623133 cri.go:89] found id: ""
	I1205 07:35:35.411249  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.411259  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:35.411266  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:35.411333  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:35.438397  623133 cri.go:89] found id: ""
	I1205 07:35:35.438424  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.438432  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:35.438441  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:35.438502  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:35.470186  623133 cri.go:89] found id: ""
	I1205 07:35:35.470257  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.470281  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:35.470300  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:35.470421  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:35.506009  623133 cri.go:89] found id: ""
	I1205 07:35:35.506090  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.506113  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:35.506133  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:35.506240  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:35.533594  623133 cri.go:89] found id: ""
	I1205 07:35:35.533666  623133 logs.go:282] 0 containers: []
	W1205 07:35:35.533702  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:35.533728  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:35.533755  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:35.564889  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:35.564918  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:35.633646  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:35.633688  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:35.650787  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:35.650821  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:35.716974  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:35.716999  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:35.717022  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
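From this 07:35:35 cycle onward the gather order shuffles between iterations (container status first here, describe nodes first in the 07:35:47 cycle) while the set of gatherers stays fixed. That is consistent with the gatherers being keyed in a Go map, as in the sketch above, since Go deliberately randomizes map iteration order; a minimal demonstration of that behavior (hypothetical, not minikube code):

	// Two consecutive ranges over the same map may visit keys in different
	// orders: Go randomizes map iteration on purpose, which would explain
	// the shuffled gather order between otherwise identical cycles.
	package main

	import "fmt"

	func main() {
		gatherers := map[string]string{
			"kubelet": "...", "dmesg": "...", "describe nodes": "...",
			"CRI-O": "...", "container status": "...",
		}
		for name := range gatherers {
			fmt.Println(name)
		}
	}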
	I1205 07:35:38.259507  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:38.271366  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:38.271448  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:38.306524  623133 cri.go:89] found id: ""
	I1205 07:35:38.306551  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.306560  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:38.306567  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:38.306626  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:38.336578  623133 cri.go:89] found id: ""
	I1205 07:35:38.336607  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.336616  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:38.336622  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:38.336679  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:38.367906  623133 cri.go:89] found id: ""
	I1205 07:35:38.367929  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.367940  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:38.367948  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:38.368010  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:38.394982  623133 cri.go:89] found id: ""
	I1205 07:35:38.395009  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.395018  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:38.395024  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:38.395091  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:38.425494  623133 cri.go:89] found id: ""
	I1205 07:35:38.425567  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.425589  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:38.425607  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:38.425690  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:38.452246  623133 cri.go:89] found id: ""
	I1205 07:35:38.452324  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.452338  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:38.452346  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:38.452412  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:38.480170  623133 cri.go:89] found id: ""
	I1205 07:35:38.480246  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.480269  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:38.480282  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:38.480356  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:38.507367  623133 cri.go:89] found id: ""
	I1205 07:35:38.507396  623133 logs.go:282] 0 containers: []
	W1205 07:35:38.507405  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:38.507433  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:38.507451  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:38.576229  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:38.576303  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:38.594685  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:38.594753  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:38.663598  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:38.663662  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:38.663690  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:38.704724  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:38.704761  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:41.236036  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:41.247647  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:41.247721  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:41.285033  623133 cri.go:89] found id: ""
	I1205 07:35:41.285066  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.285082  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:41.285089  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:41.285166  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:41.319069  623133 cri.go:89] found id: ""
	I1205 07:35:41.319104  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.319114  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:41.319121  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:41.319180  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:41.359677  623133 cri.go:89] found id: ""
	I1205 07:35:41.359699  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.359708  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:41.359714  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:41.359775  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:41.388379  623133 cri.go:89] found id: ""
	I1205 07:35:41.388403  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.388411  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:41.388417  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:41.388478  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:41.419247  623133 cri.go:89] found id: ""
	I1205 07:35:41.419269  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.419277  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:41.419282  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:41.419345  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:41.449843  623133 cri.go:89] found id: ""
	I1205 07:35:41.449865  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.449874  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:41.449881  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:41.449959  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:41.480824  623133 cri.go:89] found id: ""
	I1205 07:35:41.480845  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.480854  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:41.480863  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:41.480929  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:41.508192  623133 cri.go:89] found id: ""
	I1205 07:35:41.508215  623133 logs.go:282] 0 containers: []
	W1205 07:35:41.508223  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:41.508232  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:41.508245  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:41.576488  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:41.576524  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:41.598285  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:41.598371  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:41.662233  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:41.662299  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:41.662328  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:41.707185  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:41.707221  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:44.245941  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:44.258087  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:44.258159  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:44.317000  623133 cri.go:89] found id: ""
	I1205 07:35:44.317029  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.317038  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:44.317045  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:44.317110  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:44.394121  623133 cri.go:89] found id: ""
	I1205 07:35:44.394144  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.394153  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:44.394160  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:44.394229  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:44.437030  623133 cri.go:89] found id: ""
	I1205 07:35:44.437054  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.437063  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:44.437075  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:44.437135  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:44.470558  623133 cri.go:89] found id: ""
	I1205 07:35:44.470581  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.470589  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:44.470595  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:44.470656  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:44.497595  623133 cri.go:89] found id: ""
	I1205 07:35:44.497620  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.497629  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:44.497635  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:44.497699  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:44.524646  623133 cri.go:89] found id: ""
	I1205 07:35:44.524670  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.524680  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:44.524686  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:44.524749  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:44.551655  623133 cri.go:89] found id: ""
	I1205 07:35:44.551678  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.551687  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:44.551693  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:44.551754  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:44.579156  623133 cri.go:89] found id: ""
	I1205 07:35:44.579183  623133 logs.go:282] 0 containers: []
	W1205 07:35:44.579191  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:44.579200  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:44.579212  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:44.614518  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:44.614559  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:44.684125  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:44.684164  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:44.701948  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:44.701980  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:44.772110  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:44.772140  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:44.772154  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:47.314499  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:47.342978  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:47.343052  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:47.377232  623133 cri.go:89] found id: ""
	I1205 07:35:47.377257  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.377266  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:47.377272  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:47.377332  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:47.435905  623133 cri.go:89] found id: ""
	I1205 07:35:47.435931  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.435940  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:47.435947  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:47.436008  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:47.476871  623133 cri.go:89] found id: ""
	I1205 07:35:47.476900  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.476909  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:47.476915  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:47.476978  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:47.515707  623133 cri.go:89] found id: ""
	I1205 07:35:47.515730  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.515737  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:47.515744  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:47.515802  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:47.566608  623133 cri.go:89] found id: ""
	I1205 07:35:47.566630  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.566639  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:47.566645  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:47.566704  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:47.613376  623133 cri.go:89] found id: ""
	I1205 07:35:47.613447  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.613468  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:47.613487  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:47.613581  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:47.654672  623133 cri.go:89] found id: ""
	I1205 07:35:47.654744  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.654776  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:47.654798  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:47.654891  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:47.700192  623133 cri.go:89] found id: ""
	I1205 07:35:47.700220  623133 logs.go:282] 0 containers: []
	W1205 07:35:47.700230  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:47.700238  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:47.700262  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:47.801667  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:47.801692  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:47.801706  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:47.864937  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:47.865033  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:47.909088  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:47.909114  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:48.027223  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:48.027308  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:50.564928  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:50.576967  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:50.577040  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:50.603254  623133 cri.go:89] found id: ""
	I1205 07:35:50.603279  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.603288  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:50.603295  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:50.603355  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:50.633780  623133 cri.go:89] found id: ""
	I1205 07:35:50.633807  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.633823  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:50.633830  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:50.633975  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:50.660213  623133 cri.go:89] found id: ""
	I1205 07:35:50.660284  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.660300  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:50.660307  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:50.660370  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:50.686224  623133 cri.go:89] found id: ""
	I1205 07:35:50.686249  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.686258  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:50.686264  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:50.686337  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:50.712482  623133 cri.go:89] found id: ""
	I1205 07:35:50.712507  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.712516  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:50.712523  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:50.712608  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:50.749559  623133 cri.go:89] found id: ""
	I1205 07:35:50.749590  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.749598  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:50.749619  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:50.749698  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:50.778145  623133 cri.go:89] found id: ""
	I1205 07:35:50.778187  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.778211  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:50.778225  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:50.778312  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:50.809099  623133 cri.go:89] found id: ""
	I1205 07:35:50.809125  623133 logs.go:282] 0 containers: []
	W1205 07:35:50.809134  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:50.809144  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:50.809181  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:50.878018  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:50.878055  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:50.895717  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:50.895747  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:50.963511  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:50.963532  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:50.963545  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:51.005406  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:51.005450  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:53.569351  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:53.581182  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:53.581255  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:53.607740  623133 cri.go:89] found id: ""
	I1205 07:35:53.607766  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.607774  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:53.607781  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:53.607841  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:53.636057  623133 cri.go:89] found id: ""
	I1205 07:35:53.636081  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.636090  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:53.636097  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:53.636159  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:53.663173  623133 cri.go:89] found id: ""
	I1205 07:35:53.663199  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.663207  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:53.663214  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:53.663277  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:53.693748  623133 cri.go:89] found id: ""
	I1205 07:35:53.693773  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.693782  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:53.693788  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:53.693859  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:53.724994  623133 cri.go:89] found id: ""
	I1205 07:35:53.725017  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.725026  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:53.725032  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:53.725094  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:53.754865  623133 cri.go:89] found id: ""
	I1205 07:35:53.754895  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.754905  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:53.754943  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:53.755033  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:53.787222  623133 cri.go:89] found id: ""
	I1205 07:35:53.787247  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.787255  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:53.787265  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:53.787352  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:53.815912  623133 cri.go:89] found id: ""
	I1205 07:35:53.815940  623133 logs.go:282] 0 containers: []
	W1205 07:35:53.815948  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:53.815958  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:53.815970  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:53.847683  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:53.847712  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:53.916815  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:53.916857  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:53.934552  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:53.934581  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:54.004532  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:54.004629  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:54.004683  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:56.559153  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:56.570670  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:56.570737  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:56.599272  623133 cri.go:89] found id: ""
	I1205 07:35:56.599295  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.599303  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:56.599311  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:56.599374  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:56.626235  623133 cri.go:89] found id: ""
	I1205 07:35:56.626258  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.626266  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:56.626271  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:56.626366  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:56.652818  623133 cri.go:89] found id: ""
	I1205 07:35:56.652845  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.652854  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:56.652861  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:56.652924  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:56.684281  623133 cri.go:89] found id: ""
	I1205 07:35:56.684303  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.684312  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:56.684318  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:56.684423  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:56.710261  623133 cri.go:89] found id: ""
	I1205 07:35:56.710334  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.710355  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:56.710386  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:56.710476  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:56.735343  623133 cri.go:89] found id: ""
	I1205 07:35:56.735407  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.735429  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:56.735441  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:56.735518  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:56.762116  623133 cri.go:89] found id: ""
	I1205 07:35:56.762141  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.762150  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:56.762156  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:56.762237  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:56.789155  623133 cri.go:89] found id: ""
	I1205 07:35:56.789226  623133 logs.go:282] 0 containers: []
	W1205 07:35:56.789259  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:56.789282  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:56.789306  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:56.861111  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:56.861148  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:56.878351  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:56.878479  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:56.947731  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:56.947753  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:56.947765  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:56.990704  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:56.990739  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:35:59.532232  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:35:59.543697  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:35:59.543797  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:35:59.570183  623133 cri.go:89] found id: ""
	I1205 07:35:59.570219  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.570229  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:35:59.570235  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:35:59.570312  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:35:59.596233  623133 cri.go:89] found id: ""
	I1205 07:35:59.596310  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.596334  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:35:59.596348  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:35:59.596431  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:35:59.623607  623133 cri.go:89] found id: ""
	I1205 07:35:59.623628  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.623637  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:35:59.623643  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:35:59.623709  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:35:59.653332  623133 cri.go:89] found id: ""
	I1205 07:35:59.653404  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.653425  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:35:59.653443  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:35:59.653528  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:35:59.680459  623133 cri.go:89] found id: ""
	I1205 07:35:59.680536  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.680559  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:35:59.680573  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:35:59.680647  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:35:59.711708  623133 cri.go:89] found id: ""
	I1205 07:35:59.711730  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.711739  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:35:59.711745  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:35:59.711855  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:35:59.738669  623133 cri.go:89] found id: ""
	I1205 07:35:59.738694  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.738702  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:35:59.738708  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:35:59.738776  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:35:59.766245  623133 cri.go:89] found id: ""
	I1205 07:35:59.766322  623133 logs.go:282] 0 containers: []
	W1205 07:35:59.766346  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:35:59.766367  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:35:59.766424  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:35:59.835869  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:35:59.835905  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:35:59.853506  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:35:59.853534  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:35:59.923069  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:35:59.923130  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:35:59.923150  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:35:59.964633  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:35:59.964670  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:02.505791  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:02.517569  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:02.517638  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:02.546226  623133 cri.go:89] found id: ""
	I1205 07:36:02.546249  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.546258  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:02.546264  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:02.546331  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:02.583989  623133 cri.go:89] found id: ""
	I1205 07:36:02.584029  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.584055  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:02.584067  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:02.584144  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:02.616589  623133 cri.go:89] found id: ""
	I1205 07:36:02.616612  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.616620  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:02.616627  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:02.616686  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:02.643056  623133 cri.go:89] found id: ""
	I1205 07:36:02.643088  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.643097  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:02.643103  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:02.643171  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:02.669212  623133 cri.go:89] found id: ""
	I1205 07:36:02.669237  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.669245  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:02.669251  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:02.669313  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:02.697796  623133 cri.go:89] found id: ""
	I1205 07:36:02.697823  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.697832  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:02.697838  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:02.697902  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:02.724881  623133 cri.go:89] found id: ""
	I1205 07:36:02.724908  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.724917  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:02.724923  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:02.724980  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:02.757704  623133 cri.go:89] found id: ""
	I1205 07:36:02.757783  623133 logs.go:282] 0 containers: []
	W1205 07:36:02.757819  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:02.757845  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:02.757869  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:02.839594  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:02.839672  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:02.858254  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:02.858329  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:02.933973  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:02.934042  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:02.934069  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:02.975016  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:02.975049  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
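Every "describe nodes" attempt in these cycles fails with the same connection-refused error on localhost:8443, which points at the apiserver not serving at all rather than a kubeconfig problem. A minimal way to confirm that by hand, assuming shell access on the node and that `ss` (iproute2) is available in the image:

	# Check whether anything is listening on the port kubectl is refused on.
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	# Re-run the exact describe call the log keeps retrying.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig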
	I1205 07:36:05.520089  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:05.532552  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:05.532682  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:05.565018  623133 cri.go:89] found id: ""
	I1205 07:36:05.565095  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.565118  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:05.565137  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:05.565225  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:05.599011  623133 cri.go:89] found id: ""
	I1205 07:36:05.599033  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.599042  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:05.599048  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:05.599107  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:05.625636  623133 cri.go:89] found id: ""
	I1205 07:36:05.625673  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.625682  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:05.625689  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:05.625762  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:05.653415  623133 cri.go:89] found id: ""
	I1205 07:36:05.653440  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.653449  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:05.653455  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:05.653541  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:05.680558  623133 cri.go:89] found id: ""
	I1205 07:36:05.680636  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.680658  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:05.680670  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:05.680751  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:05.709360  623133 cri.go:89] found id: ""
	I1205 07:36:05.709385  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.709394  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:05.709400  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:05.709465  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:05.740362  623133 cri.go:89] found id: ""
	I1205 07:36:05.740393  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.740402  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:05.740408  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:05.740466  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:05.769029  623133 cri.go:89] found id: ""
	I1205 07:36:05.769102  623133 logs.go:282] 0 containers: []
	W1205 07:36:05.769125  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:05.769145  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:05.769182  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:05.786199  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:05.786279  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:05.855622  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:05.855684  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:05.855713  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:05.897617  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:05.897654  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:05.929823  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:05.929852  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:08.499406  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:08.511587  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:08.511653  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:08.560924  623133 cri.go:89] found id: ""
	I1205 07:36:08.560945  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.560954  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:08.560960  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:08.561018  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:08.650497  623133 cri.go:89] found id: ""
	I1205 07:36:08.650540  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.650549  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:08.650556  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:08.650630  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:08.686061  623133 cri.go:89] found id: ""
	I1205 07:36:08.686082  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.686091  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:08.686097  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:08.686155  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:08.726774  623133 cri.go:89] found id: ""
	I1205 07:36:08.726796  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.726813  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:08.726820  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:08.726875  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:08.764613  623133 cri.go:89] found id: ""
	I1205 07:36:08.764633  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.764657  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:08.764663  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:08.764723  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:08.797157  623133 cri.go:89] found id: ""
	I1205 07:36:08.797221  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.797244  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:08.797262  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:08.797348  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:08.827856  623133 cri.go:89] found id: ""
	I1205 07:36:08.827920  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.827943  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:08.827963  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:08.828048  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:08.861187  623133 cri.go:89] found id: ""
	I1205 07:36:08.861251  623133 logs.go:282] 0 containers: []
	W1205 07:36:08.861274  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:08.861296  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:08.861350  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:08.977880  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:08.977905  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:08.977919  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:09.035653  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:09.035751  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:09.079335  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:09.079366  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:09.181786  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:09.181891  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:11.701654  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:11.713324  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:11.713435  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:11.739977  623133 cri.go:89] found id: ""
	I1205 07:36:11.740003  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.740012  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:11.740019  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:11.740097  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:11.767105  623133 cri.go:89] found id: ""
	I1205 07:36:11.767130  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.767138  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:11.767144  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:11.767202  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:11.794403  623133 cri.go:89] found id: ""
	I1205 07:36:11.794432  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.794441  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:11.794447  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:11.794517  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:11.821313  623133 cri.go:89] found id: ""
	I1205 07:36:11.821345  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.821354  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:11.821360  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:11.821427  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:11.846886  623133 cri.go:89] found id: ""
	I1205 07:36:11.846914  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.846923  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:11.846929  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:11.846998  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:11.873358  623133 cri.go:89] found id: ""
	I1205 07:36:11.873383  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.873392  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:11.873399  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:11.873459  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:11.900545  623133 cri.go:89] found id: ""
	I1205 07:36:11.900571  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.900580  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:11.900586  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:11.900644  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:11.942604  623133 cri.go:89] found id: ""
	I1205 07:36:11.942632  623133 logs.go:282] 0 containers: []
	W1205 07:36:11.942641  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:11.942650  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:11.942660  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:12.028055  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:12.028089  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:12.048432  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:12.048463  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:12.130435  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:12.130466  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:12.130479  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:12.181698  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:12.181736  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:14.721919  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:14.733340  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:14.733410  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:14.760021  623133 cri.go:89] found id: ""
	I1205 07:36:14.760044  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.760053  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:14.760060  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:14.760118  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:14.785943  623133 cri.go:89] found id: ""
	I1205 07:36:14.785968  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.785977  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:14.785983  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:14.786046  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:14.813563  623133 cri.go:89] found id: ""
	I1205 07:36:14.813589  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.813598  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:14.813604  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:14.813663  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:14.840490  623133 cri.go:89] found id: ""
	I1205 07:36:14.840514  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.840522  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:14.840528  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:14.840590  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:14.871351  623133 cri.go:89] found id: ""
	I1205 07:36:14.871379  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.871389  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:14.871394  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:14.871457  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:14.898197  623133 cri.go:89] found id: ""
	I1205 07:36:14.898220  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.898228  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:14.898244  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:14.898308  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:14.929494  623133 cri.go:89] found id: ""
	I1205 07:36:14.929518  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.929527  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:14.929532  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:14.929592  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:14.956987  623133 cri.go:89] found id: ""
	I1205 07:36:14.957013  623133 logs.go:282] 0 containers: []
	W1205 07:36:14.957022  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:14.957031  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:14.957043  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:14.998587  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:14.998623  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:15.056488  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:15.056523  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:15.127897  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:15.127936  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:15.145168  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:15.145208  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:15.213701  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
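Note that the gather order rotates between cycles; this one ends at the failed "describe nodes" step before the next probe begins. The probes themselves follow the cadence visible in the timestamps, one `pgrep` roughly every three seconds. A sketch of the implied wait loop, assuming the same pattern the log runs; this illustrates the polling behavior only and is not minikube's actual implementation:

	# Poll until a kube-apiserver process for this profile appears.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
	echo "kube-apiserver process found"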
	I1205 07:36:17.714521  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:17.725996  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:17.726073  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:17.752765  623133 cri.go:89] found id: ""
	I1205 07:36:17.752791  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.752799  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:17.752806  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:17.752864  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:17.779636  623133 cri.go:89] found id: ""
	I1205 07:36:17.779667  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.779676  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:17.779681  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:17.779742  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:17.808690  623133 cri.go:89] found id: ""
	I1205 07:36:17.808714  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.808723  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:17.808729  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:17.808788  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:17.837250  623133 cri.go:89] found id: ""
	I1205 07:36:17.837275  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.837284  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:17.837290  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:17.837348  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:17.865090  623133 cri.go:89] found id: ""
	I1205 07:36:17.865116  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.865125  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:17.865131  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:17.865192  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:17.891528  623133 cri.go:89] found id: ""
	I1205 07:36:17.891552  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.891561  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:17.891568  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:17.891626  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:17.925673  623133 cri.go:89] found id: ""
	I1205 07:36:17.925694  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.925705  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:17.925711  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:17.925768  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:17.954215  623133 cri.go:89] found id: ""
	I1205 07:36:17.954237  623133 logs.go:282] 0 containers: []
	W1205 07:36:17.954246  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:17.954254  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:17.954265  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:17.984998  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:17.985023  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:18.056119  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:18.056157  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:18.073417  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:18.073445  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:18.143297  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:18.143318  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:18.143331  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:20.688342  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:20.699997  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:20.700078  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:20.728204  623133 cri.go:89] found id: ""
	I1205 07:36:20.728228  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.728238  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:20.728244  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:20.728303  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:20.754593  623133 cri.go:89] found id: ""
	I1205 07:36:20.754620  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.754628  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:20.754637  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:20.754696  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:20.780818  623133 cri.go:89] found id: ""
	I1205 07:36:20.780841  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.780850  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:20.780856  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:20.780914  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:20.811216  623133 cri.go:89] found id: ""
	I1205 07:36:20.811238  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.811246  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:20.811252  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:20.811307  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:20.837132  623133 cri.go:89] found id: ""
	I1205 07:36:20.837154  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.837162  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:20.837168  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:20.837230  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:20.867275  623133 cri.go:89] found id: ""
	I1205 07:36:20.867296  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.867306  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:20.867312  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:20.867372  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:20.893314  623133 cri.go:89] found id: ""
	I1205 07:36:20.893387  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.893408  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:20.893426  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:20.893510  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:20.919418  623133 cri.go:89] found id: ""
	I1205 07:36:20.919441  623133 logs.go:282] 0 containers: []
	W1205 07:36:20.919449  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:20.919458  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:20.919469  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:20.988766  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:20.988802  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:21.006526  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:21.006577  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:21.078706  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:21.078760  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:21.078796  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:21.121421  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:21.121457  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:23.651500  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:23.663717  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:23.663790  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:23.690545  623133 cri.go:89] found id: ""
	I1205 07:36:23.690570  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.690579  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:23.690586  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:23.690643  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:23.721201  623133 cri.go:89] found id: ""
	I1205 07:36:23.721269  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.721278  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:23.721285  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:23.721345  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:23.747938  623133 cri.go:89] found id: ""
	I1205 07:36:23.747960  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.747969  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:23.747976  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:23.748065  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:23.777606  623133 cri.go:89] found id: ""
	I1205 07:36:23.777630  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.777638  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:23.777644  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:23.777703  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:23.807298  623133 cri.go:89] found id: ""
	I1205 07:36:23.807326  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.807334  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:23.807341  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:23.807453  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:23.833834  623133 cri.go:89] found id: ""
	I1205 07:36:23.833859  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.833867  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:23.833874  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:23.833992  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:23.861200  623133 cri.go:89] found id: ""
	I1205 07:36:23.861226  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.861234  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:23.861241  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:23.861333  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:23.887373  623133 cri.go:89] found id: ""
	I1205 07:36:23.887396  623133 logs.go:282] 0 containers: []
	W1205 07:36:23.887405  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:23.887414  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:23.887425  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:23.904820  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:23.904850  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:23.972851  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:23.972870  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:23.972883  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:24.014965  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:24.015008  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:24.048742  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:24.048774  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:26.618551  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:26.630674  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:26.630750  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:26.659507  623133 cri.go:89] found id: ""
	I1205 07:36:26.659531  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.659539  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:26.659544  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:26.659603  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:26.691859  623133 cri.go:89] found id: ""
	I1205 07:36:26.691882  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.691890  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:26.691896  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:26.691954  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:26.719463  623133 cri.go:89] found id: ""
	I1205 07:36:26.719489  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.719498  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:26.719504  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:26.719566  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:26.746276  623133 cri.go:89] found id: ""
	I1205 07:36:26.746300  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.746310  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:26.746316  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:26.746390  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:26.772116  623133 cri.go:89] found id: ""
	I1205 07:36:26.772140  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.772148  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:26.772153  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:26.772212  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:26.803825  623133 cri.go:89] found id: ""
	I1205 07:36:26.803850  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.803860  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:26.803867  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:26.803932  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:26.832885  623133 cri.go:89] found id: ""
	I1205 07:36:26.832910  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.832918  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:26.832925  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:26.833013  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:26.861131  623133 cri.go:89] found id: ""
	I1205 07:36:26.861155  623133 logs.go:282] 0 containers: []
	W1205 07:36:26.861164  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:26.861172  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:26.861207  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:26.894076  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:26.894109  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:26.963869  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:26.963910  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:26.980636  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:26.980665  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:27.052332  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:27.052357  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:27.052370  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:29.595593  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:29.606935  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:29.607005  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:29.637665  623133 cri.go:89] found id: ""
	I1205 07:36:29.637690  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.637699  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:29.637705  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:29.637763  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:29.664047  623133 cri.go:89] found id: ""
	I1205 07:36:29.664070  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.664078  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:29.664085  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:29.664145  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:29.697984  623133 cri.go:89] found id: ""
	I1205 07:36:29.698007  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.698015  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:29.698022  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:29.698086  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:29.725724  623133 cri.go:89] found id: ""
	I1205 07:36:29.725751  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.725760  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:29.725767  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:29.725830  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:29.756683  623133 cri.go:89] found id: ""
	I1205 07:36:29.756711  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.756719  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:29.756725  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:29.756784  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:29.785133  623133 cri.go:89] found id: ""
	I1205 07:36:29.785159  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.785167  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:29.785174  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:29.785240  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:29.814431  623133 cri.go:89] found id: ""
	I1205 07:36:29.814457  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.814466  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:29.814472  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:29.814531  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:29.845515  623133 cri.go:89] found id: ""
	I1205 07:36:29.845536  623133 logs.go:282] 0 containers: []
	W1205 07:36:29.845544  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:29.845552  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:29.845563  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:29.911724  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:29.911788  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:29.911815  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:29.957166  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:29.957273  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:29.989490  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:29.989521  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:30.105936  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:30.105977  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
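[Editor's note: the recurring `describe nodes` failure is expected in this state. The kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running the TCP connection is refused, so the command exits with status 1 on every cycle; the other "Gathering logs" steps run locally on the node and succeed regardless. For reference, the commands below are copied from the `Run:` lines above; only the grouping into one block is mine.]

	# Node-local log gathering; these work even while the apiserver is down.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# Apiserver-dependent step; fails with 'connection refused' on localhost:8443
	# for as long as no kube-apiserver is running.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig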
	I1205 07:36:32.626473  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:32.639201  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:32.639285  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:32.680686  623133 cri.go:89] found id: ""
	I1205 07:36:32.680712  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.680721  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:32.680733  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:32.680792  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:32.724835  623133 cri.go:89] found id: ""
	I1205 07:36:32.724860  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.724869  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:32.724875  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:32.724965  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:32.767521  623133 cri.go:89] found id: ""
	I1205 07:36:32.767547  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.767556  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:32.767568  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:32.767632  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:32.817771  623133 cri.go:89] found id: ""
	I1205 07:36:32.817798  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.817813  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:32.817820  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:32.817892  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:32.862000  623133 cri.go:89] found id: ""
	I1205 07:36:32.862030  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.862039  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:32.862046  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:32.862154  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:32.907800  623133 cri.go:89] found id: ""
	I1205 07:36:32.907825  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.907834  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:32.907862  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:32.907951  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:32.961426  623133 cri.go:89] found id: ""
	I1205 07:36:32.961488  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.961512  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:32.961530  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:32.961605  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:32.995954  623133 cri.go:89] found id: ""
	I1205 07:36:32.996017  623133 logs.go:282] 0 containers: []
	W1205 07:36:32.996040  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:32.996061  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:32.996086  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:33.016371  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:33.016455  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:33.108764  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:33.108826  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:33.108863  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:33.154598  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:33.154674  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:33.190998  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:33.191065  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:35.766510  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:35.777907  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:35.777980  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:35.805606  623133 cri.go:89] found id: ""
	I1205 07:36:35.805632  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.805640  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:35.805647  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:35.805706  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:35.837042  623133 cri.go:89] found id: ""
	I1205 07:36:35.837069  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.837078  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:35.837084  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:35.837143  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:35.864444  623133 cri.go:89] found id: ""
	I1205 07:36:35.864470  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.864480  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:35.864487  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:35.864545  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:35.891752  623133 cri.go:89] found id: ""
	I1205 07:36:35.891785  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.891794  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:35.891800  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:35.891864  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:35.917936  623133 cri.go:89] found id: ""
	I1205 07:36:35.917962  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.917971  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:35.917977  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:35.918041  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:35.944753  623133 cri.go:89] found id: ""
	I1205 07:36:35.944778  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.944787  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:35.944793  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:35.944856  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:35.970357  623133 cri.go:89] found id: ""
	I1205 07:36:35.970400  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.970409  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:35.970418  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:35.970475  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:35.995833  623133 cri.go:89] found id: ""
	I1205 07:36:35.995856  623133 logs.go:282] 0 containers: []
	W1205 07:36:35.995864  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:35.995880  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:35.995892  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:36.067353  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:36.067386  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:36.084563  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:36.084590  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:36.146861  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:36.146884  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:36.146897  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:36.188939  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:36.188973  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:38.718520  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:38.730148  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:38.730216  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:38.756158  623133 cri.go:89] found id: ""
	I1205 07:36:38.756180  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.756189  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:38.756195  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:38.756254  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:38.781731  623133 cri.go:89] found id: ""
	I1205 07:36:38.781754  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.781762  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:38.781770  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:38.781829  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:38.808400  623133 cri.go:89] found id: ""
	I1205 07:36:38.808422  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.808431  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:38.808439  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:38.808498  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:38.833741  623133 cri.go:89] found id: ""
	I1205 07:36:38.833766  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.833781  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:38.833787  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:38.833853  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:38.860164  623133 cri.go:89] found id: ""
	I1205 07:36:38.860186  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.860202  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:38.860209  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:38.860267  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:38.886547  623133 cri.go:89] found id: ""
	I1205 07:36:38.886620  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.886645  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:38.886663  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:38.886738  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:38.913053  623133 cri.go:89] found id: ""
	I1205 07:36:38.913076  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.913085  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:38.913091  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:38.913151  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:38.942175  623133 cri.go:89] found id: ""
	I1205 07:36:38.942201  623133 logs.go:282] 0 containers: []
	W1205 07:36:38.942210  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:38.942220  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:38.942233  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:39.004598  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:39.004619  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:39.004634  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:39.051611  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:39.051656  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:39.091735  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:39.091770  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:39.160888  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:39.160933  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:41.678940  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:41.690723  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:41.690800  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:41.718838  623133 cri.go:89] found id: ""
	I1205 07:36:41.718865  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.718874  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:41.718880  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:41.718941  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:41.745475  623133 cri.go:89] found id: ""
	I1205 07:36:41.745496  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.745505  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:41.745510  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:41.745567  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:41.772421  623133 cri.go:89] found id: ""
	I1205 07:36:41.772443  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.772451  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:41.772457  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:41.772516  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:41.798738  623133 cri.go:89] found id: ""
	I1205 07:36:41.798760  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.798768  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:41.798775  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:41.798837  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:41.829608  623133 cri.go:89] found id: ""
	I1205 07:36:41.829629  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.829638  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:41.829643  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:41.829703  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:41.855808  623133 cri.go:89] found id: ""
	I1205 07:36:41.855886  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.855897  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:41.855905  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:41.855964  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:41.881750  623133 cri.go:89] found id: ""
	I1205 07:36:41.881772  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.881787  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:41.881793  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:41.881854  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:41.912350  623133 cri.go:89] found id: ""
	I1205 07:36:41.912376  623133 logs.go:282] 0 containers: []
	W1205 07:36:41.912385  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:41.912393  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:41.912424  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:41.980253  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:41.980289  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:41.997265  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:41.997295  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:42.069674  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:42.069752  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:42.069784  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:42.113803  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:42.113858  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:44.650303  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:44.661716  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:44.661807  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:44.688281  623133 cri.go:89] found id: ""
	I1205 07:36:44.688303  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.688312  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:44.688319  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:44.688400  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:44.718237  623133 cri.go:89] found id: ""
	I1205 07:36:44.718310  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.718333  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:44.718352  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:44.718454  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:44.744555  623133 cri.go:89] found id: ""
	I1205 07:36:44.744582  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.744590  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:44.744596  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:44.744674  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:44.770781  623133 cri.go:89] found id: ""
	I1205 07:36:44.770841  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.770862  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:44.770882  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:44.770945  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:44.796457  623133 cri.go:89] found id: ""
	I1205 07:36:44.796525  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.796548  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:44.796562  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:44.796641  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:44.823185  623133 cri.go:89] found id: ""
	I1205 07:36:44.823209  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.823217  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:44.823223  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:44.823305  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:44.853469  623133 cri.go:89] found id: ""
	I1205 07:36:44.853493  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.853501  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:44.853507  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:44.853585  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:44.879627  623133 cri.go:89] found id: ""
	I1205 07:36:44.879655  623133 logs.go:282] 0 containers: []
	W1205 07:36:44.879664  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:44.879673  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:44.879703  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:44.944839  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:44.944902  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:44.944929  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:44.986754  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:44.986789  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:45.049483  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:45.049522  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:45.135789  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:45.135841  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:47.658843  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:47.670262  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:47.670336  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:47.696591  623133 cri.go:89] found id: ""
	I1205 07:36:47.696615  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.696624  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:47.696630  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:47.696687  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:47.722572  623133 cri.go:89] found id: ""
	I1205 07:36:47.722595  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.722604  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:47.722610  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:47.722670  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:47.748591  623133 cri.go:89] found id: ""
	I1205 07:36:47.748614  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.748622  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:47.748628  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:47.748688  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:47.774567  623133 cri.go:89] found id: ""
	I1205 07:36:47.774589  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.774597  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:47.774603  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:47.774660  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:47.801273  623133 cri.go:89] found id: ""
	I1205 07:36:47.801295  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.801303  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:47.801310  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:47.801368  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:47.827082  623133 cri.go:89] found id: ""
	I1205 07:36:47.827106  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.827115  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:47.827121  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:47.827180  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:47.855377  623133 cri.go:89] found id: ""
	I1205 07:36:47.855401  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.855416  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:47.855423  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:47.855484  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:47.882730  623133 cri.go:89] found id: ""
	I1205 07:36:47.882754  623133 logs.go:282] 0 containers: []
	W1205 07:36:47.882763  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:47.882771  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:47.882782  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:47.926090  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:47.926129  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:47.960407  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:47.960433  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:48.029778  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:48.029815  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:48.047114  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:48.047146  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:48.109573  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:50.610507  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:50.621875  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:50.621973  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:50.647844  623133 cri.go:89] found id: ""
	I1205 07:36:50.647867  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.647876  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:50.647882  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:50.647940  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:50.673668  623133 cri.go:89] found id: ""
	I1205 07:36:50.673692  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.673701  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:50.673707  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:50.673775  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:50.700037  623133 cri.go:89] found id: ""
	I1205 07:36:50.700064  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.700074  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:50.700080  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:50.700139  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:50.726113  623133 cri.go:89] found id: ""
	I1205 07:36:50.726136  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.726145  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:50.726178  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:50.726239  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:50.751894  623133 cri.go:89] found id: ""
	I1205 07:36:50.751922  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.751931  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:50.751937  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:50.751998  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:50.781058  623133 cri.go:89] found id: ""
	I1205 07:36:50.781081  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.781089  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:50.781095  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:50.781155  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:50.809679  623133 cri.go:89] found id: ""
	I1205 07:36:50.809701  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.809709  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:50.809714  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:50.809774  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:50.836822  623133 cri.go:89] found id: ""
	I1205 07:36:50.836849  623133 logs.go:282] 0 containers: []
	W1205 07:36:50.836857  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:50.836866  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:50.836878  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:50.902895  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:50.902914  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:50.902929  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:50.945317  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:50.945357  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:50.977406  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:50.977435  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:51.048092  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:51.048127  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:53.566926  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:53.583436  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:53.583504  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:53.645666  623133 cri.go:89] found id: ""
	I1205 07:36:53.645695  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.645704  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:53.645711  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:53.645766  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:53.693265  623133 cri.go:89] found id: ""
	I1205 07:36:53.693287  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.693296  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:53.693302  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:53.693366  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:53.728128  623133 cri.go:89] found id: ""
	I1205 07:36:53.728152  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.728160  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:53.728166  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:53.728233  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:53.762494  623133 cri.go:89] found id: ""
	I1205 07:36:53.762516  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.762524  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:53.762532  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:53.762594  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:53.804428  623133 cri.go:89] found id: ""
	I1205 07:36:53.804454  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.804463  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:53.804469  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:53.804527  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:53.844179  623133 cri.go:89] found id: ""
	I1205 07:36:53.844206  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.844215  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:53.844221  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:53.844281  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:53.880909  623133 cri.go:89] found id: ""
	I1205 07:36:53.880930  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.880942  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:53.880947  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:53.881010  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:53.919438  623133 cri.go:89] found id: ""
	I1205 07:36:53.919460  623133 logs.go:282] 0 containers: []
	W1205 07:36:53.919468  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:53.919477  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:53.919488  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:53.958772  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:53.958807  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:54.032348  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:54.032384  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:54.049737  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:54.049768  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:54.122354  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:54.122391  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:54.122404  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:56.666487  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:56.679179  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:56.679255  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:56.713236  623133 cri.go:89] found id: ""
	I1205 07:36:56.713263  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.713271  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:56.713277  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:56.713336  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:56.760822  623133 cri.go:89] found id: ""
	I1205 07:36:56.760849  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.760858  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:56.760864  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:56.760924  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:56.791190  623133 cri.go:89] found id: ""
	I1205 07:36:56.791211  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.791219  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:56.791224  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:56.791287  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:56.827502  623133 cri.go:89] found id: ""
	I1205 07:36:56.827528  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.827536  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:56.827543  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:56.827600  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:56.873318  623133 cri.go:89] found id: ""
	I1205 07:36:56.873346  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.873355  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:56.873363  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:56.873422  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:56.913462  623133 cri.go:89] found id: ""
	I1205 07:36:56.913488  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.913504  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:56.913511  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:56.913582  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:56.946678  623133 cri.go:89] found id: ""
	I1205 07:36:56.946711  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.946720  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:56.946726  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:56.946795  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:36:56.980971  623133 cri.go:89] found id: ""
	I1205 07:36:56.981004  623133 logs.go:282] 0 containers: []
	W1205 07:36:56.981012  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:36:56.981021  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:36:56.981032  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:36:57.021489  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:36:57.021516  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:36:57.105196  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:36:57.105231  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:36:57.127498  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:36:57.127530  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:36:57.216469  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:36:57.216491  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:36:57.216509  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:36:59.768062  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:36:59.779612  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:36:59.779688  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:36:59.806686  623133 cri.go:89] found id: ""
	I1205 07:36:59.806719  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.806728  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:36:59.806735  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:36:59.806795  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:36:59.833076  623133 cri.go:89] found id: ""
	I1205 07:36:59.833103  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.833112  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:36:59.833118  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:36:59.833179  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:36:59.860619  623133 cri.go:89] found id: ""
	I1205 07:36:59.860650  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.860660  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:36:59.860666  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:36:59.860728  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:36:59.887700  623133 cri.go:89] found id: ""
	I1205 07:36:59.887726  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.887735  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:36:59.887741  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:36:59.887804  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:36:59.918752  623133 cri.go:89] found id: ""
	I1205 07:36:59.918780  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.918788  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:36:59.918795  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:36:59.918854  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:36:59.952195  623133 cri.go:89] found id: ""
	I1205 07:36:59.952222  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.952230  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:36:59.952236  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:36:59.952298  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:36:59.986909  623133 cri.go:89] found id: ""
	I1205 07:36:59.986933  623133 logs.go:282] 0 containers: []
	W1205 07:36:59.986942  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:36:59.986948  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:36:59.987007  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:00.025243  623133 cri.go:89] found id: ""
	I1205 07:37:00.025277  623133 logs.go:282] 0 containers: []
	W1205 07:37:00.025287  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:00.025297  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:00.025310  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:00.133650  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:00.133794  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:00.174908  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:00.175004  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:00.419963  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:00.419989  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:00.420002  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:00.469322  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:00.469365  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:03.007606  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:03.019920  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:03.019996  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:03.047874  623133 cri.go:89] found id: ""
	I1205 07:37:03.047900  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.047908  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:03.047915  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:03.047974  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:03.077900  623133 cri.go:89] found id: ""
	I1205 07:37:03.077926  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.077935  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:03.077941  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:03.078000  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:03.108637  623133 cri.go:89] found id: ""
	I1205 07:37:03.108661  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.108669  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:03.108675  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:03.108738  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:03.135818  623133 cri.go:89] found id: ""
	I1205 07:37:03.135842  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.135851  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:03.135857  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:03.135916  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:03.162936  623133 cri.go:89] found id: ""
	I1205 07:37:03.162961  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.162970  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:03.162976  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:03.163038  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:03.190431  623133 cri.go:89] found id: ""
	I1205 07:37:03.190454  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.190463  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:03.190469  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:03.190535  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:03.217323  623133 cri.go:89] found id: ""
	I1205 07:37:03.217348  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.217357  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:03.217363  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:03.217425  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:03.243770  623133 cri.go:89] found id: ""
	I1205 07:37:03.243795  623133 logs.go:282] 0 containers: []
	W1205 07:37:03.243804  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:03.243813  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:03.243825  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:03.280044  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:03.280070  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:03.355040  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:03.355079  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:03.376430  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:03.376456  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:03.442790  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:03.442810  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:03.442822  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:05.985558  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:05.997492  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:05.997565  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:06.027621  623133 cri.go:89] found id: ""
	I1205 07:37:06.027649  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.027658  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:06.027665  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:06.027728  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:06.056303  623133 cri.go:89] found id: ""
	I1205 07:37:06.056329  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.056338  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:06.056345  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:06.056407  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:06.082946  623133 cri.go:89] found id: ""
	I1205 07:37:06.082975  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.082984  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:06.082990  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:06.083051  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:06.110249  623133 cri.go:89] found id: ""
	I1205 07:37:06.110272  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.110280  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:06.110287  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:06.110352  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:06.138349  623133 cri.go:89] found id: ""
	I1205 07:37:06.138430  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.138454  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:06.138473  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:06.138547  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:06.166423  623133 cri.go:89] found id: ""
	I1205 07:37:06.166450  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.166459  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:06.166465  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:06.166545  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:06.194682  623133 cri.go:89] found id: ""
	I1205 07:37:06.194705  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.194714  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:06.194721  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:06.194787  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:06.222035  623133 cri.go:89] found id: ""
	I1205 07:37:06.222062  623133 logs.go:282] 0 containers: []
	W1205 07:37:06.222071  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:06.222086  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:06.222098  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:06.239427  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:06.239455  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:06.342330  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:06.342349  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:06.342362  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:06.386534  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:06.386571  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:06.418070  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:06.418095  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:08.991807  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:09.003989  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:09.004072  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:09.039749  623133 cri.go:89] found id: ""
	I1205 07:37:09.039773  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.039781  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:09.039787  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:09.039851  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:09.067252  623133 cri.go:89] found id: ""
	I1205 07:37:09.067276  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.067284  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:09.067290  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:09.067348  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:09.097825  623133 cri.go:89] found id: ""
	I1205 07:37:09.097858  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.097866  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:09.097873  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:09.097958  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:09.125410  623133 cri.go:89] found id: ""
	I1205 07:37:09.125434  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.125443  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:09.125449  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:09.125525  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:09.154341  623133 cri.go:89] found id: ""
	I1205 07:37:09.154369  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.154399  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:09.154406  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:09.154476  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:09.186627  623133 cri.go:89] found id: ""
	I1205 07:37:09.186649  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.186657  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:09.186663  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:09.186739  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:09.213192  623133 cri.go:89] found id: ""
	I1205 07:37:09.213218  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.213227  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:09.213233  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:09.213317  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:09.240176  623133 cri.go:89] found id: ""
	I1205 07:37:09.240201  623133 logs.go:282] 0 containers: []
	W1205 07:37:09.240210  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:09.240219  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:09.240230  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:09.319728  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:09.319815  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:09.339900  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:09.339978  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:09.410785  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:09.410857  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:09.410900  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:09.452951  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:09.452988  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:11.983313  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:11.994935  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:11.995007  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:12.025870  623133 cri.go:89] found id: ""
	I1205 07:37:12.025900  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.025909  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:12.025915  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:12.025988  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:12.053481  623133 cri.go:89] found id: ""
	I1205 07:37:12.053545  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.053578  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:12.053596  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:12.053701  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:12.080316  623133 cri.go:89] found id: ""
	I1205 07:37:12.080389  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.080403  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:12.080410  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:12.080478  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:12.109665  623133 cri.go:89] found id: ""
	I1205 07:37:12.109690  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.109698  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:12.109705  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:12.109781  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:12.136870  623133 cri.go:89] found id: ""
	I1205 07:37:12.136895  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.136904  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:12.136910  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:12.136992  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:12.163193  623133 cri.go:89] found id: ""
	I1205 07:37:12.163218  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.163227  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:12.163233  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:12.163302  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:12.191098  623133 cri.go:89] found id: ""
	I1205 07:37:12.191121  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.191130  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:12.191135  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:12.191206  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:12.223690  623133 cri.go:89] found id: ""
	I1205 07:37:12.223751  623133 logs.go:282] 0 containers: []
	W1205 07:37:12.223773  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:12.223790  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:12.223803  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:12.241197  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:12.241227  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:12.320568  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:12.320588  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:12.320601  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:12.375102  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:12.375141  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:12.405035  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:12.405061  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:14.979078  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:14.990894  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:14.990968  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:15.032498  623133 cri.go:89] found id: ""
	I1205 07:37:15.032523  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.032532  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:15.032539  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:15.032621  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:15.066851  623133 cri.go:89] found id: ""
	I1205 07:37:15.066876  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.066885  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:15.066892  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:15.066960  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:15.095623  623133 cri.go:89] found id: ""
	I1205 07:37:15.095649  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.095662  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:15.095668  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:15.095738  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:15.126029  623133 cri.go:89] found id: ""
	I1205 07:37:15.126054  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.126064  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:15.126071  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:15.126134  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:15.154394  623133 cri.go:89] found id: ""
	I1205 07:37:15.154422  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.154431  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:15.154437  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:15.154500  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:15.183041  623133 cri.go:89] found id: ""
	I1205 07:37:15.183112  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.183135  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:15.183147  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:15.183231  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:15.210689  623133 cri.go:89] found id: ""
	I1205 07:37:15.210714  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.210723  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:15.210729  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:15.210793  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:15.237769  623133 cri.go:89] found id: ""
	I1205 07:37:15.237802  623133 logs.go:282] 0 containers: []
	W1205 07:37:15.237812  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:15.237846  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:15.237877  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:15.281680  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:15.281760  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:15.323765  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:15.323801  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:15.400395  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:15.400442  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:15.417663  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:15.417695  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:15.484840  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
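Editorial note: the timestamps show this loop retrying roughly every three seconds (07:36:53, :56, :59, 07:37:03, ...), each pass repeating the identical sweep with identical results. To pull the probe cadence out of a saved copy of this log (hedged; minikube.log is a hypothetical filename for the captured output):

    # Print the timestamp of each apiserver probe to see the retry interval:
    grep -F 'pgrep -xnf kube-apiserver' minikube.log | awk '{print $2}'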
	I1205 07:37:17.986536  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:17.998400  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:17.998473  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:18.033996  623133 cri.go:89] found id: ""
	I1205 07:37:18.034063  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.034085  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:18.034102  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:18.034189  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:18.065301  623133 cri.go:89] found id: ""
	I1205 07:37:18.065325  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.065333  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:18.065340  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:18.065404  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:18.092861  623133 cri.go:89] found id: ""
	I1205 07:37:18.092884  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.092893  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:18.092899  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:18.092961  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:18.120143  623133 cri.go:89] found id: ""
	I1205 07:37:18.120168  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.120176  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:18.120182  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:18.120243  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:18.147610  623133 cri.go:89] found id: ""
	I1205 07:37:18.147635  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.147643  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:18.147651  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:18.147714  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:18.178215  623133 cri.go:89] found id: ""
	I1205 07:37:18.178241  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.178250  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:18.178257  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:18.178318  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:18.204905  623133 cri.go:89] found id: ""
	I1205 07:37:18.204933  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.204942  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:18.204949  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:18.205010  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:18.232470  623133 cri.go:89] found id: ""
	I1205 07:37:18.232494  623133 logs.go:282] 0 containers: []
	W1205 07:37:18.232503  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:18.232512  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:18.232541  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:18.264179  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:18.264206  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:18.342898  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:18.342985  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:18.360665  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:18.360689  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:18.430869  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:18.430932  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:18.430949  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:20.973333  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:20.985181  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:20.985252  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:21.013254  623133 cri.go:89] found id: ""
	I1205 07:37:21.013282  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.013292  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:21.013299  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:21.013364  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:21.040770  623133 cri.go:89] found id: ""
	I1205 07:37:21.040792  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.040801  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:21.040807  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:21.040870  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:21.067164  623133 cri.go:89] found id: ""
	I1205 07:37:21.067188  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.067197  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:21.067204  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:21.067263  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:21.092914  623133 cri.go:89] found id: ""
	I1205 07:37:21.092938  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.092946  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:21.092953  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:21.093010  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:21.122974  623133 cri.go:89] found id: ""
	I1205 07:37:21.122998  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.123007  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:21.123013  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:21.123075  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:21.150045  623133 cri.go:89] found id: ""
	I1205 07:37:21.150070  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.150080  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:21.150087  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:21.150147  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:21.175912  623133 cri.go:89] found id: ""
	I1205 07:37:21.175937  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.175945  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:21.175952  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:21.176013  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:21.204695  623133 cri.go:89] found id: ""
	I1205 07:37:21.204719  623133 logs.go:282] 0 containers: []
	W1205 07:37:21.204728  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:21.204737  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:21.204748  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:21.272612  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:21.272657  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:21.295476  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:21.295504  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:21.369059  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:21.369077  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:21.369090  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:21.410783  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:21.410822  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:23.941231  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:23.954200  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:23.954272  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:23.987169  623133 cri.go:89] found id: ""
	I1205 07:37:23.987196  623133 logs.go:282] 0 containers: []
	W1205 07:37:23.987204  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:23.987210  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:23.987270  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:24.027401  623133 cri.go:89] found id: ""
	I1205 07:37:24.027429  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.027438  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:24.027444  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:24.027506  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:24.062105  623133 cri.go:89] found id: ""
	I1205 07:37:24.062131  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.062140  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:24.062146  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:24.062210  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:24.097962  623133 cri.go:89] found id: ""
	I1205 07:37:24.098065  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.098078  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:24.098086  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:24.098154  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:24.149807  623133 cri.go:89] found id: ""
	I1205 07:37:24.149829  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.149838  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:24.149844  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:24.149925  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:24.197313  623133 cri.go:89] found id: ""
	I1205 07:37:24.197336  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.197350  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:24.197357  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:24.197418  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:24.232474  623133 cri.go:89] found id: ""
	I1205 07:37:24.232495  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.232504  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:24.232509  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:24.232566  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:24.264865  623133 cri.go:89] found id: ""
	I1205 07:37:24.264887  623133 logs.go:282] 0 containers: []
	W1205 07:37:24.264895  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:24.264906  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:24.264917  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:24.323711  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:24.323751  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:24.355011  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:24.355045  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:24.428217  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:24.428256  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:24.445628  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:24.445666  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:24.514101  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:27.014509  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:27.027348  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:27.027415  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:27.061309  623133 cri.go:89] found id: ""
	I1205 07:37:27.061332  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.061342  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:27.061349  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:27.061407  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:27.096624  623133 cri.go:89] found id: ""
	I1205 07:37:27.096646  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.096654  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:27.096660  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:27.096718  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:27.129592  623133 cri.go:89] found id: ""
	I1205 07:37:27.129614  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.129623  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:27.129629  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:27.129687  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:27.164759  623133 cri.go:89] found id: ""
	I1205 07:37:27.164781  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.164790  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:27.164796  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:27.164854  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:27.195546  623133 cri.go:89] found id: ""
	I1205 07:37:27.195566  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.195575  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:27.195581  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:27.195639  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:27.239662  623133 cri.go:89] found id: ""
	I1205 07:37:27.239724  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.239749  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:27.239767  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:27.239848  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:27.276924  623133 cri.go:89] found id: ""
	I1205 07:37:27.276985  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.277009  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:27.277026  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:27.277101  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:27.321825  623133 cri.go:89] found id: ""
	I1205 07:37:27.321888  623133 logs.go:282] 0 containers: []
	W1205 07:37:27.321921  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:27.321951  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:27.321976  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:27.382661  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:27.382901  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:27.432349  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:27.432372  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:27.507481  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:27.507557  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:27.528622  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:27.528654  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:27.617956  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:30.118298  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:30.131826  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:30.131908  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:30.163368  623133 cri.go:89] found id: ""
	I1205 07:37:30.163393  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.163402  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:30.163409  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:30.163475  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:30.191270  623133 cri.go:89] found id: ""
	I1205 07:37:30.191296  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.191305  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:30.191312  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:30.191376  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:30.244652  623133 cri.go:89] found id: ""
	I1205 07:37:30.244678  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.244687  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:30.244693  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:30.244752  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:30.300382  623133 cri.go:89] found id: ""
	I1205 07:37:30.300407  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.300418  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:30.300425  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:30.300486  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:30.352330  623133 cri.go:89] found id: ""
	I1205 07:37:30.352355  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.352364  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:30.352369  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:30.352429  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:30.399014  623133 cri.go:89] found id: ""
	I1205 07:37:30.399042  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.399051  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:30.399058  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:30.399116  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:30.438638  623133 cri.go:89] found id: ""
	I1205 07:37:30.438711  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.438735  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:30.438753  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:30.438847  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:30.468699  623133 cri.go:89] found id: ""
	I1205 07:37:30.468770  623133 logs.go:282] 0 containers: []
	W1205 07:37:30.468794  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:30.468815  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:30.468853  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:30.491364  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:30.491443  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:30.600758  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:30.600818  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:30.600853  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:30.653847  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:30.653925  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:30.692716  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:30.692740  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:33.283849  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:33.296601  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:33.296670  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:33.336764  623133 cri.go:89] found id: ""
	I1205 07:37:33.336784  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.336793  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:33.336799  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:33.336859  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:33.366168  623133 cri.go:89] found id: ""
	I1205 07:37:33.366190  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.366199  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:33.366205  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:33.366263  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:33.393947  623133 cri.go:89] found id: ""
	I1205 07:37:33.393981  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.393990  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:33.393997  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:33.394060  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:33.420964  623133 cri.go:89] found id: ""
	I1205 07:37:33.420986  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.420995  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:33.421001  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:33.421059  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:33.448694  623133 cri.go:89] found id: ""
	I1205 07:37:33.448715  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.448724  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:33.448730  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:33.448791  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:33.480119  623133 cri.go:89] found id: ""
	I1205 07:37:33.480145  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.480153  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:33.480160  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:33.480218  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:33.510238  623133 cri.go:89] found id: ""
	I1205 07:37:33.510259  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.510267  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:33.510273  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:33.510329  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:33.536428  623133 cri.go:89] found id: ""
	I1205 07:37:33.536504  623133 logs.go:282] 0 containers: []
	W1205 07:37:33.536526  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:33.536544  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:33.536571  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:33.604085  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:33.604121  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:33.620749  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:33.620779  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:33.693039  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:33.693058  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:33.693114  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:33.740243  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:33.740279  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:36.282529  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:36.294453  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:36.294520  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:36.326106  623133 cri.go:89] found id: ""
	I1205 07:37:36.326128  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.326136  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:36.326143  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:36.326203  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:36.355992  623133 cri.go:89] found id: ""
	I1205 07:37:36.356015  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.356024  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:36.356030  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:36.356097  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:36.383283  623133 cri.go:89] found id: ""
	I1205 07:37:36.383306  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.383315  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:36.383320  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:36.383381  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:36.410513  623133 cri.go:89] found id: ""
	I1205 07:37:36.410535  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.410544  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:36.410550  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:36.410611  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:36.436427  623133 cri.go:89] found id: ""
	I1205 07:37:36.436449  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.436457  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:36.436463  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:36.436523  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:36.467173  623133 cri.go:89] found id: ""
	I1205 07:37:36.467195  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.467203  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:36.467209  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:36.467267  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:36.493667  623133 cri.go:89] found id: ""
	I1205 07:37:36.493693  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.493702  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:36.493709  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:36.493771  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:36.521299  623133 cri.go:89] found id: ""
	I1205 07:37:36.521328  623133 logs.go:282] 0 containers: []
	W1205 07:37:36.521337  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:36.521345  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:36.521357  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:36.593980  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:36.594018  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:36.611865  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:36.611895  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:36.680485  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:36.680509  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:36.680523  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:36.721651  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:36.721685  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
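Each cycle opens with a process-level check before any container queries: pgrep looks for a running kube-apiserver whose full command line mentions minikube, and its exit status tells minikube whether an apiserver process exists at all. A sketch of that check in isolation, using the same flags seen in the log (-x exact match, -n newest process, -f match against the full command line):

    # Exit status 0 means a matching apiserver process exists; nonzero means none yet.
    sudo pgrep -xnf "kube-apiserver.*minikube.*" \
      && echo "apiserver process found" \
      || echo "no apiserver process yet"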
	I1205 07:37:39.256293  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:39.269557  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:39.269637  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:39.351475  623133 cri.go:89] found id: ""
	I1205 07:37:39.351502  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.351511  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:39.351517  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:39.351574  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:39.407112  623133 cri.go:89] found id: ""
	I1205 07:37:39.407139  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.407148  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:39.407154  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:39.407235  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:39.442327  623133 cri.go:89] found id: ""
	I1205 07:37:39.442354  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.442362  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:39.442369  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:39.442451  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:39.470723  623133 cri.go:89] found id: ""
	I1205 07:37:39.470748  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.470758  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:39.470764  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:39.470823  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:39.498451  623133 cri.go:89] found id: ""
	I1205 07:37:39.498478  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.498487  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:39.498494  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:39.498564  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:39.528889  623133 cri.go:89] found id: ""
	I1205 07:37:39.528915  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.528924  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:39.528931  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:39.528988  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:39.561692  623133 cri.go:89] found id: ""
	I1205 07:37:39.561713  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.561721  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:39.561727  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:39.561784  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:39.602143  623133 cri.go:89] found id: ""
	I1205 07:37:39.602166  623133 logs.go:282] 0 containers: []
	W1205 07:37:39.602175  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:39.602183  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:39.602197  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:39.621511  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:39.621586  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:39.712668  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:39.712725  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:39.712759  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:39.759415  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:39.759453  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:39.789286  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:39.789315  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:42.362498  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:42.374415  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:42.374487  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:42.401551  623133 cri.go:89] found id: ""
	I1205 07:37:42.401576  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.401585  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:42.401591  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:42.401651  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:42.427668  623133 cri.go:89] found id: ""
	I1205 07:37:42.427694  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.427703  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:42.427709  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:42.427769  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:42.453202  623133 cri.go:89] found id: ""
	I1205 07:37:42.453225  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.453234  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:42.453239  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:42.453297  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:42.479442  623133 cri.go:89] found id: ""
	I1205 07:37:42.479466  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.479475  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:42.479481  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:42.479548  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:42.508769  623133 cri.go:89] found id: ""
	I1205 07:37:42.508796  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.508804  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:42.508810  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:42.508867  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:42.535229  623133 cri.go:89] found id: ""
	I1205 07:37:42.535254  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.535264  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:42.535270  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:42.535328  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:42.562034  623133 cri.go:89] found id: ""
	I1205 07:37:42.562060  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.562068  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:42.562075  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:42.562133  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:42.587878  623133 cri.go:89] found id: ""
	I1205 07:37:42.587901  623133 logs.go:282] 0 containers: []
	W1205 07:37:42.587910  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:42.587918  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:42.587931  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:42.655453  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:42.655486  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:42.673247  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:42.673274  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:42.742137  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:42.742160  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:42.742172  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:42.783536  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:42.783571  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
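Every "describe nodes" attempt fails identically: the connection to localhost:8443 is refused. That is consistent with the empty kube-apiserver probes above — nothing is listening on the port because no apiserver container ever started. Two quick checks that would confirm this from inside the node, sketched here under the assumption that ss and curl are available in the image:

    # Is anything listening on the apiserver port at all?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"
    # A live apiserver would answer this; here it should fail with "connection refused".
    curl -ksS https://localhost:8443/healthz || echo "healthz probe failed"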
	I1205 07:37:45.312786  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:45.327261  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:45.327340  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:45.359136  623133 cri.go:89] found id: ""
	I1205 07:37:45.359169  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.359178  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:45.359185  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:45.359253  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:45.390295  623133 cri.go:89] found id: ""
	I1205 07:37:45.390330  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.390339  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:45.390346  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:45.390446  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:45.416374  623133 cri.go:89] found id: ""
	I1205 07:37:45.416397  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.416406  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:45.416412  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:45.416485  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:45.444625  623133 cri.go:89] found id: ""
	I1205 07:37:45.444656  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.444665  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:45.444671  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:45.444740  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:45.471741  623133 cri.go:89] found id: ""
	I1205 07:37:45.471767  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.471776  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:45.471782  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:45.471843  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:45.506346  623133 cri.go:89] found id: ""
	I1205 07:37:45.506391  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.506401  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:45.506407  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:45.506467  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:45.534196  623133 cri.go:89] found id: ""
	I1205 07:37:45.534223  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.534232  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:45.534239  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:45.534301  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:45.563646  623133 cri.go:89] found id: ""
	I1205 07:37:45.563669  623133 logs.go:282] 0 containers: []
	W1205 07:37:45.563679  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:45.563687  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:45.563699  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:45.592547  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:45.592573  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:45.661349  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:45.661388  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:45.678770  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:45.678799  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:45.745343  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:45.745367  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:45.745379  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:48.287140  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:48.311013  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:48.311095  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:48.347148  623133 cri.go:89] found id: ""
	I1205 07:37:48.347175  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.347196  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:48.347229  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:48.347306  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:48.374340  623133 cri.go:89] found id: ""
	I1205 07:37:48.374371  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.374405  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:48.374412  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:48.374486  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:48.402443  623133 cri.go:89] found id: ""
	I1205 07:37:48.402505  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.402527  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:48.402547  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:48.402630  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:48.429750  623133 cri.go:89] found id: ""
	I1205 07:37:48.429784  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.429792  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:48.429799  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:48.429867  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:48.456820  623133 cri.go:89] found id: ""
	I1205 07:37:48.456861  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.456870  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:48.456881  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:48.456951  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:48.483759  623133 cri.go:89] found id: ""
	I1205 07:37:48.483828  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.483842  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:48.483850  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:48.483913  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:48.511489  623133 cri.go:89] found id: ""
	I1205 07:37:48.511515  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.511524  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:48.511532  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:48.511602  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:48.539566  623133 cri.go:89] found id: ""
	I1205 07:37:48.539599  623133 logs.go:282] 0 containers: []
	W1205 07:37:48.539610  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:48.539619  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:48.539631  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:48.557269  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:48.557297  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:48.622989  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:48.623053  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:48.623076  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:48.665361  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:48.665402  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:48.695563  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:48.695595  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:51.264533  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:51.277364  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:51.277434  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:51.327267  623133 cri.go:89] found id: ""
	I1205 07:37:51.327289  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.327299  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:51.327305  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:51.327368  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:51.357527  623133 cri.go:89] found id: ""
	I1205 07:37:51.357550  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.357558  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:51.357564  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:51.357626  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:51.388385  623133 cri.go:89] found id: ""
	I1205 07:37:51.388409  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.388419  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:51.388425  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:51.388484  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:51.414814  623133 cri.go:89] found id: ""
	I1205 07:37:51.414839  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.414848  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:51.414855  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:51.414915  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:51.446434  623133 cri.go:89] found id: ""
	I1205 07:37:51.446458  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.446467  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:51.446473  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:51.446534  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:51.475155  623133 cri.go:89] found id: ""
	I1205 07:37:51.475177  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.475186  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:51.475192  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:51.475249  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:51.502337  623133 cri.go:89] found id: ""
	I1205 07:37:51.502363  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.502393  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:51.502401  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:51.502462  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:51.533217  623133 cri.go:89] found id: ""
	I1205 07:37:51.533242  623133 logs.go:282] 0 containers: []
	W1205 07:37:51.533252  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:51.533261  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:51.533272  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:51.600592  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:51.600626  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:51.618566  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:51.618605  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:51.690167  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:51.690189  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:51.690202  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:51.730912  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:51.730946  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:54.265121  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:54.282923  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:54.283005  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:54.323948  623133 cri.go:89] found id: ""
	I1205 07:37:54.324025  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.324047  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:54.324066  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:54.324155  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:54.357613  623133 cri.go:89] found id: ""
	I1205 07:37:54.357637  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.357647  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:54.357653  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:54.357715  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:54.385423  623133 cri.go:89] found id: ""
	I1205 07:37:54.385446  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.385456  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:54.385464  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:54.385525  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:54.414528  623133 cri.go:89] found id: ""
	I1205 07:37:54.414553  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.414562  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:54.414568  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:54.414628  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:54.441704  623133 cri.go:89] found id: ""
	I1205 07:37:54.441734  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.441744  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:54.441750  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:54.441821  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:54.467906  623133 cri.go:89] found id: ""
	I1205 07:37:54.467973  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.467999  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:54.468017  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:54.468100  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:54.499049  623133 cri.go:89] found id: ""
	I1205 07:37:54.499115  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.499137  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:54.499148  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:54.499234  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:54.524967  623133 cri.go:89] found id: ""
	I1205 07:37:54.525035  623133 logs.go:282] 0 containers: []
	W1205 07:37:54.525056  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:54.525077  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:54.525113  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:37:54.597685  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:54.597731  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:54.614869  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:54.614897  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:54.679536  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:54.679558  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:54.679572  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:54.719851  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:54.719886  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:57.252514  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:37:57.263936  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:37:57.264004  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:37:57.291702  623133 cri.go:89] found id: ""
	I1205 07:37:57.291722  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.291731  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:37:57.291738  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:37:57.291800  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:37:57.339451  623133 cri.go:89] found id: ""
	I1205 07:37:57.339480  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.339488  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:37:57.339495  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:37:57.339553  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:37:57.370758  623133 cri.go:89] found id: ""
	I1205 07:37:57.370785  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.370794  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:37:57.370801  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:37:57.370862  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:37:57.401753  623133 cri.go:89] found id: ""
	I1205 07:37:57.401780  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.401789  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:37:57.401796  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:37:57.401864  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:37:57.428969  623133 cri.go:89] found id: ""
	I1205 07:37:57.428994  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.429002  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:37:57.429008  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:37:57.429064  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:37:57.456327  623133 cri.go:89] found id: ""
	I1205 07:37:57.456349  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.456358  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:37:57.456365  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:37:57.456425  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:37:57.484113  623133 cri.go:89] found id: ""
	I1205 07:37:57.484136  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.484144  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:37:57.484150  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:37:57.484211  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:37:57.511491  623133 cri.go:89] found id: ""
	I1205 07:37:57.511517  623133 logs.go:282] 0 containers: []
	W1205 07:37:57.511526  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:37:57.511535  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:37:57.511547  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:37:57.528657  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:37:57.528687  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:37:57.600576  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:37:57.600594  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:37:57.600606  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:37:57.641163  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:37:57.641198  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:37:57.671137  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:37:57.671165  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:00.239259  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:00.286348  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:38:00.286450  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:38:00.340351  623133 cri.go:89] found id: ""
	I1205 07:38:00.340378  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.340388  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:38:00.340395  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:38:00.340462  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:38:00.375987  623133 cri.go:89] found id: ""
	I1205 07:38:00.376017  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.376026  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:38:00.376032  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:38:00.376095  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:38:00.409639  623133 cri.go:89] found id: ""
	I1205 07:38:00.409668  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.409678  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:38:00.409684  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:38:00.409746  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:38:00.437452  623133 cri.go:89] found id: ""
	I1205 07:38:00.437481  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.437496  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:38:00.437502  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:38:00.437566  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:38:00.467065  623133 cri.go:89] found id: ""
	I1205 07:38:00.467105  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.467115  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:38:00.467121  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:38:00.467196  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:38:00.495997  623133 cri.go:89] found id: ""
	I1205 07:38:00.496031  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.496041  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:38:00.496047  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:38:00.496110  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:38:00.523333  623133 cri.go:89] found id: ""
	I1205 07:38:00.523355  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.523364  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:38:00.523370  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:38:00.523437  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:38:00.549966  623133 cri.go:89] found id: ""
	I1205 07:38:00.549988  623133 logs.go:282] 0 containers: []
	W1205 07:38:00.549996  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:38:00.550005  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:38:00.550017  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:00.619575  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:38:00.619612  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:38:00.637941  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:38:00.637969  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:38:00.705650  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:38:00.705668  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:38:00.705681  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:38:00.745862  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:38:00.745895  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
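With every probe still coming back empty at this point, the most informative artifact is usually the kubelet journal, since the kubelet is what should be starting the static control-plane pods in the first place. The gathering commands minikube runs above can be replayed by hand; a consolidated sketch using the exact units and flags from the log:

    sudo journalctl -u kubelet -n 400    # why static pods are not being started
    sudo journalctl -u crio -n 400       # container runtime errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # any containers at all?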
	I1205 07:38:03.279420  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:03.292055  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:38:03.292128  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:38:03.328516  623133 cri.go:89] found id: ""
	I1205 07:38:03.328537  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.328545  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:38:03.328551  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:38:03.328610  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:38:03.362681  623133 cri.go:89] found id: ""
	I1205 07:38:03.362702  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.362710  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:38:03.362716  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:38:03.362778  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:38:03.389004  623133 cri.go:89] found id: ""
	I1205 07:38:03.389032  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.389041  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:38:03.389047  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:38:03.389107  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:38:03.417355  623133 cri.go:89] found id: ""
	I1205 07:38:03.417381  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.417389  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:38:03.417395  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:38:03.417507  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:38:03.444075  623133 cri.go:89] found id: ""
	I1205 07:38:03.444143  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.444163  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:38:03.444174  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:38:03.444245  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:38:03.471159  623133 cri.go:89] found id: ""
	I1205 07:38:03.471186  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.471195  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:38:03.471208  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:38:03.471268  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:38:03.497093  623133 cri.go:89] found id: ""
	I1205 07:38:03.497128  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.497137  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:38:03.497159  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:38:03.497250  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:38:03.523894  623133 cri.go:89] found id: ""
	I1205 07:38:03.523959  623133 logs.go:282] 0 containers: []
	W1205 07:38:03.523981  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:38:03.524000  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:38:03.524036  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:03.591116  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:38:03.591150  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:38:03.608407  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:38:03.608435  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:38:03.674876  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:38:03.674940  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:38:03.674966  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:38:03.715710  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:38:03.715742  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:38:06.244417  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:06.256202  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:38:06.256271  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:38:06.295340  623133 cri.go:89] found id: ""
	I1205 07:38:06.295362  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.295370  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:38:06.295376  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:38:06.295433  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:38:06.339068  623133 cri.go:89] found id: ""
	I1205 07:38:06.339091  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.339099  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:38:06.339105  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:38:06.339164  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:38:06.365924  623133 cri.go:89] found id: ""
	I1205 07:38:06.365948  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.365956  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:38:06.365962  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:38:06.366026  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:38:06.394724  623133 cri.go:89] found id: ""
	I1205 07:38:06.394747  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.394755  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:38:06.394761  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:38:06.394826  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:38:06.422747  623133 cri.go:89] found id: ""
	I1205 07:38:06.422771  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.422779  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:38:06.422785  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:38:06.422844  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:38:06.449609  623133 cri.go:89] found id: ""
	I1205 07:38:06.449630  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.449639  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:38:06.449645  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:38:06.449707  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:38:06.482278  623133 cri.go:89] found id: ""
	I1205 07:38:06.482301  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.482309  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:38:06.482317  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:38:06.482404  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:38:06.510024  623133 cri.go:89] found id: ""
	I1205 07:38:06.510047  623133 logs.go:282] 0 containers: []
	W1205 07:38:06.510055  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:38:06.510064  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:38:06.510078  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:06.578438  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:38:06.578483  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:38:06.595930  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:38:06.595959  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:38:06.666812  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:38:06.666833  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:38:06.666845  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:38:06.707733  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:38:06.707770  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:38:09.240011  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:09.251820  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:38:09.251890  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:38:09.283431  623133 cri.go:89] found id: ""
	I1205 07:38:09.283453  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.283461  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:38:09.283468  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:38:09.283525  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:38:09.323094  623133 cri.go:89] found id: ""
	I1205 07:38:09.323114  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.323122  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:38:09.323128  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:38:09.323185  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:38:09.353758  623133 cri.go:89] found id: ""
	I1205 07:38:09.353779  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.353787  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:38:09.353793  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:38:09.353851  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:38:09.381582  623133 cri.go:89] found id: ""
	I1205 07:38:09.381672  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.381706  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:38:09.381753  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:38:09.381877  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:38:09.410562  623133 cri.go:89] found id: ""
	I1205 07:38:09.410584  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.410592  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:38:09.410598  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:38:09.410658  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:38:09.437488  623133 cri.go:89] found id: ""
	I1205 07:38:09.437510  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.437519  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:38:09.437526  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:38:09.437585  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:38:09.464481  623133 cri.go:89] found id: ""
	I1205 07:38:09.464509  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.464520  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:38:09.464526  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:38:09.464588  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:38:09.491746  623133 cri.go:89] found id: ""
	I1205 07:38:09.491770  623133 logs.go:282] 0 containers: []
	W1205 07:38:09.491778  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:38:09.491787  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:38:09.491799  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:38:09.532393  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:38:09.532425  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:38:09.564064  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:38:09.564148  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:09.632264  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:38:09.632306  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:38:09.650278  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:38:09.650309  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:38:09.723385  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:38:12.223654  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:12.235122  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:38:12.235193  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:38:12.265256  623133 cri.go:89] found id: ""
	I1205 07:38:12.265282  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.265291  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:38:12.265303  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:38:12.265362  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:38:12.295095  623133 cri.go:89] found id: ""
	I1205 07:38:12.295119  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.295128  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:38:12.295134  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:38:12.295193  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:38:12.340220  623133 cri.go:89] found id: ""
	I1205 07:38:12.340245  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.340253  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:38:12.340259  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:38:12.340325  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:38:12.369020  623133 cri.go:89] found id: ""
	I1205 07:38:12.369048  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.369056  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:38:12.369062  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:38:12.369121  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:38:12.398717  623133 cri.go:89] found id: ""
	I1205 07:38:12.398742  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.398751  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:38:12.398757  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:38:12.398821  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:38:12.425439  623133 cri.go:89] found id: ""
	I1205 07:38:12.425467  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.425476  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:38:12.425482  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:38:12.425542  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:38:12.452599  623133 cri.go:89] found id: ""
	I1205 07:38:12.452625  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.452634  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:38:12.452640  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:38:12.452699  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:38:12.482645  623133 cri.go:89] found id: ""
	I1205 07:38:12.482671  623133 logs.go:282] 0 containers: []
	W1205 07:38:12.482680  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:38:12.482688  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:38:12.482700  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:12.549449  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:38:12.549488  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:38:12.566130  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:38:12.566159  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:38:12.636858  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:38:12.636920  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:38:12.636945  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:38:12.679193  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:38:12.679227  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:38:15.208854  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:15.220344  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:38:15.220414  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:38:15.251564  623133 cri.go:89] found id: ""
	I1205 07:38:15.251590  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.251599  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:38:15.251605  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:38:15.251669  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:38:15.286419  623133 cri.go:89] found id: ""
	I1205 07:38:15.286442  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.286450  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:38:15.286456  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:38:15.286521  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:38:15.332555  623133 cri.go:89] found id: ""
	I1205 07:38:15.332578  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.332586  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:38:15.332593  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:38:15.332656  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:38:15.360463  623133 cri.go:89] found id: ""
	I1205 07:38:15.360487  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.360497  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:38:15.360503  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:38:15.360566  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:38:15.386851  623133 cri.go:89] found id: ""
	I1205 07:38:15.386878  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.386887  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:38:15.386893  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:38:15.386952  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:38:15.414110  623133 cri.go:89] found id: ""
	I1205 07:38:15.414142  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.414150  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:38:15.414157  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:38:15.414215  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:38:15.442253  623133 cri.go:89] found id: ""
	I1205 07:38:15.442276  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.442286  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:38:15.442293  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:38:15.442364  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:38:15.469082  623133 cri.go:89] found id: ""
	I1205 07:38:15.469107  623133 logs.go:282] 0 containers: []
	W1205 07:38:15.469115  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:38:15.469124  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:38:15.469136  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 07:38:15.511869  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:38:15.511906  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:38:15.541075  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:38:15.541107  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:38:15.611976  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:38:15.612013  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:38:15.629319  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:38:15.629349  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:38:15.693200  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
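(The same gather cycle repeats roughly every three seconds with identical results until the restart budget runs out — see the 4m3s duration metric just below. A rough sketch of the loop's shape, assuming the ~3 s interval visible in the timestamps; this is not minikube's actual code:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        sleep 3   # each miss triggers the kubelet/dmesg/CRI-O log gathering shown above
    done
)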
	I1205 07:38:18.194059  623133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:38:18.206756  623133 kubeadm.go:602] duration metric: took 4m3.662249592s to restartPrimaryControlPlane
	W1205 07:38:18.206820  623133 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 07:38:18.206878  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 07:38:18.637573  623133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:38:18.651664  623133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:38:18.661093  623133 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:38:18.661154  623133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:38:18.670801  623133 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:38:18.670820  623133 kubeadm.go:158] found existing configuration files:
	
	I1205 07:38:18.670874  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:38:18.680050  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:38:18.680128  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:38:18.688892  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:38:18.697874  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:38:18.697968  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:38:18.706737  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:38:18.715401  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:38:18.715474  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:38:18.724265  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:38:18.733064  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:38:18.733129  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
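(The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A hypothetical condensation of the same logic:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done
)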
	I1205 07:38:18.741768  623133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:38:18.864779  623133 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:38:18.865335  623133 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:38:18.933098  623133 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:42:19.938171  623133 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1205 07:42:19.938230  623133 kubeadm.go:319] 
	I1205 07:42:19.938315  623133 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:42:19.943080  623133 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:42:19.943141  623133 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:42:19.943231  623133 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:42:19.943287  623133 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:42:19.943323  623133 kubeadm.go:319] OS: Linux
	I1205 07:42:19.943367  623133 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:42:19.943415  623133 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:42:19.943462  623133 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:42:19.943510  623133 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:42:19.943557  623133 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:42:19.943606  623133 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:42:19.943652  623133 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:42:19.943700  623133 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:42:19.943746  623133 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:42:19.943818  623133 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:42:19.943912  623133 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:42:19.944002  623133 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:42:19.944064  623133 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:42:19.947339  623133 out.go:252]   - Generating certificates and keys ...
	I1205 07:42:19.947430  623133 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:42:19.947497  623133 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:42:19.947573  623133 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:42:19.947633  623133 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:42:19.947703  623133 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:42:19.947757  623133 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:42:19.947820  623133 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:42:19.947883  623133 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:42:19.947957  623133 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:42:19.948029  623133 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:42:19.948068  623133 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:42:19.948123  623133 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:42:19.948174  623133 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:42:19.948230  623133 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:42:19.948289  623133 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:42:19.948352  623133 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:42:19.948407  623133 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:42:19.948490  623133 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:42:19.948555  623133 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:42:19.951499  623133 out.go:252]   - Booting up control plane ...
	I1205 07:42:19.951614  623133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:42:19.951701  623133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:42:19.951771  623133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:42:19.951877  623133 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:42:19.951974  623133 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:42:19.952089  623133 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:42:19.952176  623133 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:42:19.952219  623133 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:42:19.952354  623133 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:42:19.952461  623133 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:42:19.952527  623133 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000992095s
	I1205 07:42:19.952535  623133 kubeadm.go:319] 
	I1205 07:42:19.952592  623133 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:42:19.952627  623133 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:42:19.952739  623133 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:42:19.952746  623133 kubeadm.go:319] 
	I1205 07:42:19.952850  623133 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:42:19.952885  623133 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:42:19.952929  623133 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1205 07:42:19.953041  623133 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000992095s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
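(For reference, the health probe kubeadm timed out on, plus the two triage commands it suggests, can be run directly inside the node — for example via `minikube ssh`. All three commands are quoted verbatim in the log above:

    curl -sSL http://127.0.0.1:10248/healthz
    systemctl status kubelet
    journalctl -xeu kubelet
)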
	
	I1205 07:42:19.953130  623133 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 07:42:19.953395  623133 kubeadm.go:319] 
	I1205 07:42:20.373299  623133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:42:20.388334  623133 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:42:20.388398  623133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:42:20.397738  623133 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:42:20.397761  623133 kubeadm.go:158] found existing configuration files:
	
	I1205 07:42:20.397823  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:42:20.406815  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:42:20.406888  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:42:20.415592  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:42:20.424696  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:42:20.424769  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:42:20.433678  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:42:20.442801  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:42:20.442870  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:42:20.451837  623133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:42:20.460966  623133 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:42:20.461079  623133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:42:20.469788  623133 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:42:20.515785  623133 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1205 07:42:20.516060  623133 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:42:20.596417  623133 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:42:20.596489  623133 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:42:20.596528  623133 kubeadm.go:319] OS: Linux
	I1205 07:42:20.596575  623133 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:42:20.596625  623133 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:42:20.596674  623133 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:42:20.596723  623133 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:42:20.596772  623133 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:42:20.596823  623133 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:42:20.596868  623133 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:42:20.596917  623133 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:42:20.596965  623133 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:42:20.672903  623133 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:42:20.673012  623133 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:42:20.673102  623133 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:42:20.686991  623133 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:42:20.692315  623133 out.go:252]   - Generating certificates and keys ...
	I1205 07:42:20.692434  623133 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:42:20.692515  623133 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:42:20.692611  623133 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 07:42:20.692681  623133 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1205 07:42:20.692770  623133 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 07:42:20.692835  623133 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1205 07:42:20.692907  623133 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1205 07:42:20.692977  623133 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1205 07:42:20.693061  623133 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 07:42:20.693151  623133 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 07:42:20.693190  623133 kubeadm.go:319] [certs] Using the existing "sa" key
	I1205 07:42:20.693256  623133 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:42:20.853689  623133 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:42:21.058903  623133 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:42:21.380530  623133 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:42:21.804166  623133 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:42:22.053736  623133 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:42:22.054457  623133 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:42:22.057205  623133 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:42:22.060451  623133 out.go:252]   - Booting up control plane ...
	I1205 07:42:22.060549  623133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:42:22.060622  623133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:42:22.061914  623133 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:42:22.077990  623133 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:42:22.078211  623133 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:42:22.086102  623133 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:42:22.086953  623133 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:42:22.087206  623133 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:42:22.215696  623133 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:42:22.215837  623133 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:46:22.216452  623133 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001059422s
	I1205 07:46:22.216484  623133 kubeadm.go:319] 
	I1205 07:46:22.216539  623133 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:46:22.216570  623133 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:46:22.216668  623133 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:46:22.216674  623133 kubeadm.go:319] 
	I1205 07:46:22.216772  623133 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:46:22.216802  623133 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:46:22.216832  623133 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:46:22.216835  623133 kubeadm.go:319] 
	I1205 07:46:22.221347  623133 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:46:22.221774  623133 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:46:22.221881  623133 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:46:22.222153  623133 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:46:22.222160  623133 kubeadm.go:319] 
	I1205 07:46:22.222228  623133 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1205 07:46:22.222279  623133 kubeadm.go:403] duration metric: took 12m7.744210771s to StartCluster
	I1205 07:46:22.222312  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:22.222371  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:22.285575  623133 cri.go:89] found id: ""
	I1205 07:46:22.285598  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.285606  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:22.285612  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:46:22.285677  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:22.328010  623133 cri.go:89] found id: ""
	I1205 07:46:22.328032  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.328040  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:46:22.328046  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:46:22.328103  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:22.379159  623133 cri.go:89] found id: ""
	I1205 07:46:22.379180  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.379189  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:46:22.379199  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:22.379258  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:22.431742  623133 cri.go:89] found id: ""
	I1205 07:46:22.431764  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.431772  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:22.431779  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:22.431838  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:22.488582  623133 cri.go:89] found id: ""
	I1205 07:46:22.488603  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.488612  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:22.488618  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:22.488677  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:22.532122  623133 cri.go:89] found id: ""
	I1205 07:46:22.532195  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.532217  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:22.532237  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:22.532324  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:22.592637  623133 cri.go:89] found id: ""
	I1205 07:46:22.592719  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.592743  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:22.592761  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:46:22.592866  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:46:22.632766  623133 cri.go:89] found id: ""
	I1205 07:46:22.632841  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.632880  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:22.632902  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:46:22.632938  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:22.688671  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:22.688743  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:22.783159  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:22.783240  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:22.802062  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:22.802087  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:22.948734  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:22.948793  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:46:22.948828  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1205 07:46:23.016579  623133 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001059422s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:46:23.016687  623133 out.go:285] * 
	W1205 07:46:23.016942  623133 out.go:285] * 
	W1205 07:46:23.019270  623133 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:46:23.028967  623133 out.go:203] 
	W1205 07:46:23.032868  623133 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001059422s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:46:23.032908  623133 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:46:23.032927  623133 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:46:23.036043  623133 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
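The kubeadm trace above stalls at the kubelet health endpoint on 127.0.0.1:10248. The troubleshooting steps it prints can be reproduced inside the node over minikube's ssh; a minimal sketch, reusing this run's binary path and profile name:

	# unit state and recent journal for the kubelet, as suggested in the kubeadm output
	out/minikube-linux-arm64 -p kubernetes-upgrade-421996 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p kubernetes-upgrade-421996 ssh -- sudo journalctl -xeu kubelet
	# the same healthz probe kubeadm polls for up to 4m0s
	out/minikube-linux-arm64 -p kubernetes-upgrade-421996 ssh -- curl -sSL http://127.0.0.1:10248/healthz

The printed suggestion proposes retrying with the systemd cgroup driver; applied to this invocation, that would be:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd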
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-421996 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-421996 version --output=json: exit status 1 (158.171479ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
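Only the server half of `kubectl version` fails: the client block above prints normally because it needs no connection, while the server query targets 192.168.76.2:8443, where the earlier crictl listings found no kube-apiserver container to answer. To double-check which endpoint the context resolves to, a sketch (assuming minikube's usual convention of naming the kubeconfig cluster after the profile):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="kubernetes-upgrade-421996")].cluster.server}'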
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-05 07:46:24.0829559 +0000 UTC m=+5724.151288645
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-421996
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-421996:

-- stdout --
	[
	    {
	        "Id": "b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c",
	        "Created": "2025-12-05T07:33:11.033980653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:33:49.22388133Z",
	            "FinishedAt": "2025-12-05T07:33:47.852421338Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c/hosts",
	        "LogPath": "/var/lib/docker/containers/b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c/b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c-json.log",
	        "Name": "/kubernetes-upgrade-421996",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-421996:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-421996",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b604729f1e5b8ae502170c1249f12190dc91f780ae7376315dab87a037fb753c",
	                "LowerDir": "/var/lib/docker/overlay2/109c72b03a8cbf470696933c8826fed3baed959c91b810b2632d474e274cd994-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/109c72b03a8cbf470696933c8826fed3baed959c91b810b2632d474e274cd994/merged",
	                "UpperDir": "/var/lib/docker/overlay2/109c72b03a8cbf470696933c8826fed3baed959c91b810b2632d474e274cd994/diff",
	                "WorkDir": "/var/lib/docker/overlay2/109c72b03a8cbf470696933c8826fed3baed959c91b810b2632d474e274cd994/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-421996",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-421996/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-421996",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-421996",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-421996",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c960a047611a3c5acd4c42e7b9d39d926d766f8e3ca1a2a9f2aa657ce602e71c",
	            "SandboxKey": "/var/run/docker/netns/c960a047611a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-421996": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:58:e3:d2:9d:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "013d49ba4b40360fa5b2f8fb650d28d4495fa72571cf4b8c947541703e72c9fb",
	                    "EndpointID": "08bd16d73cf4fe5524f7f777c067c0b8d5e70cd9f28e8e5f30404ba416258b11",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-421996",
	                        "b604729f1e5b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
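The inspect dump above is the full JSON; when a single field is enough, docker inspect takes a Go template via -f. A sketch that extracts the host port published for the apiserver's 8443/tcp on this container:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' kubernetes-upgrade-421996

which, per the NetworkSettings.Ports block above, prints 33376.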
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-421996 -n kubernetes-upgrade-421996
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-421996 -n kubernetes-upgrade-421996: exit status 2 (487.03788ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-421996 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-421996 logs -n 25: (1.231504549s)
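The harness collects only the last 25 log lines here; the fuller capture that the boxed advice in the failure output asks to attach to a GitHub issue would come from the same binary, e.g.:

	out/minikube-linux-arm64 -p kubernetes-upgrade-421996 logs --file=logs.txt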
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ delete  │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ ssh     │ -p NoKubernetes-587853 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │                     │
	│ start   │ -p missing-upgrade-168812 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-168812    │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:33 UTC │
	│ ssh     │ -p NoKubernetes-587853 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │                     │
	│ delete  │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p missing-upgrade-168812                                                                                                                       │ missing-upgrade-168812    │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p kubernetes-upgrade-421996                                                                                                                    │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │                     │
	│ start   │ -p stopped-upgrade-837565 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-837565    │ jenkins │ v1.35.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ stop    │ stopped-upgrade-837565 stop                                                                                                                     │ stopped-upgrade-837565    │ jenkins │ v1.35.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p stopped-upgrade-837565 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-837565    │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:38 UTC │
	│ delete  │ -p stopped-upgrade-837565                                                                                                                       │ stopped-upgrade-837565    │ jenkins │ v1.37.0 │ 05 Dec 25 07:38 UTC │ 05 Dec 25 07:38 UTC │
	│ start   │ -p running-upgrade-685187 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-685187    │ jenkins │ v1.35.0 │ 05 Dec 25 07:38 UTC │ 05 Dec 25 07:39 UTC │
	│ start   │ -p running-upgrade-685187 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-685187    │ jenkins │ v1.37.0 │ 05 Dec 25 07:39 UTC │ 05 Dec 25 07:43 UTC │
	│ delete  │ -p running-upgrade-685187                                                                                                                       │ running-upgrade-685187    │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │ 05 Dec 25 07:43 UTC │
	│ start   │ -p pause-908773 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p pause-908773 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ pause   │ -p pause-908773 --alsologtostderr -v=5                                                                                                          │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	│ delete  │ -p pause-908773                                                                                                                                 │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p force-systemd-flag-059215 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-059215 │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
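
The header above fully specifies the line format, so entries can be decoded mechanically. A minimal sketch in Go (the field labels are mine, not a minikube API; any glog-style line works):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Format from the header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
        line := "I1205 07:45:54.330965  659790 out.go:360] Setting OutFile to fd 1 ..."
        re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s mmdd=%s time=%s threadid=%s source=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }
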
	I1205 07:45:54.330965  659790 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:54.331183  659790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:54.331211  659790 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:54.331231  659790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:54.331507  659790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:45:54.331935  659790 out.go:368] Setting JSON to false
	I1205 07:45:54.332898  659790 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16082,"bootTime":1764904673,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:45:54.332992  659790 start.go:143] virtualization:  
	I1205 07:45:54.339489  659790 out.go:179] * [force-systemd-flag-059215] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:54.343095  659790 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:54.343207  659790 notify.go:221] Checking for updates...
	I1205 07:45:54.350132  659790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:54.353590  659790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:45:54.356909  659790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:45:54.360054  659790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:54.363303  659790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:54.367072  659790 config.go:182] Loaded profile config "kubernetes-upgrade-421996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:45:54.367185  659790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:54.399625  659790 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:54.399748  659790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:54.459955  659790 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:45:54.449556041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:54.460084  659790 docker.go:319] overlay module found
	I1205 07:45:54.463415  659790 out.go:179] * Using the docker driver based on user configuration
	I1205 07:45:54.466424  659790 start.go:309] selected driver: docker
	I1205 07:45:54.466443  659790 start.go:927] validating driver "docker" against <nil>
	I1205 07:45:54.466456  659790 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:54.467212  659790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:54.529239  659790 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:45:54.519901646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:54.529401  659790 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 07:45:54.529626  659790 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 07:45:54.532758  659790 out.go:179] * Using Docker driver with root privileges
	I1205 07:45:54.535726  659790 cni.go:84] Creating CNI manager for ""
	I1205 07:45:54.535807  659790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:45:54.535820  659790 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 07:45:54.535909  659790 start.go:353] cluster config:
	{Name:force-systemd-flag-059215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-059215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:54.539161  659790 out.go:179] * Starting "force-systemd-flag-059215" primary control-plane node in "force-systemd-flag-059215" cluster
	I1205 07:45:54.542215  659790 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:45:54.545263  659790 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:54.548171  659790 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:45:54.548236  659790 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1205 07:45:54.548250  659790 cache.go:65] Caching tarball of preloaded images
	I1205 07:45:54.548248  659790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:54.548336  659790 preload.go:238] Found /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 07:45:54.548346  659790 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:45:54.548453  659790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/config.json ...
	I1205 07:45:54.548480  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/config.json: {Name:mk32042f65d4dc90f90ab5f4a31ebeaddf7e0a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:54.566924  659790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:54.566945  659790 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:45:54.566959  659790 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:54.566990  659790 start.go:360] acquireMachinesLock for force-systemd-flag-059215: {Name:mka2668c31d6aee08a4d877b4f99afb03550628d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:54.567101  659790 start.go:364] duration metric: took 87.361µs to acquireMachinesLock for "force-systemd-flag-059215"
	I1205 07:45:54.567133  659790 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-059215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-059215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:45:54.567204  659790 start.go:125] createHost starting for "" (driver="docker")
	I1205 07:45:54.570625  659790 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1205 07:45:54.570863  659790 start.go:159] libmachine.API.Create for "force-systemd-flag-059215" (driver="docker")
	I1205 07:45:54.570956  659790 client.go:173] LocalClient.Create starting
	I1205 07:45:54.571047  659790 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem
	I1205 07:45:54.571092  659790 main.go:143] libmachine: Decoding PEM data...
	I1205 07:45:54.571112  659790 main.go:143] libmachine: Parsing certificate...
	I1205 07:45:54.571178  659790 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem
	I1205 07:45:54.571208  659790 main.go:143] libmachine: Decoding PEM data...
	I1205 07:45:54.571224  659790 main.go:143] libmachine: Parsing certificate...
	I1205 07:45:54.571599  659790 cli_runner.go:164] Run: docker network inspect force-systemd-flag-059215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 07:45:54.588407  659790 cli_runner.go:211] docker network inspect force-systemd-flag-059215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 07:45:54.588493  659790 network_create.go:284] running [docker network inspect force-systemd-flag-059215] to gather additional debugging logs...
	I1205 07:45:54.588511  659790 cli_runner.go:164] Run: docker network inspect force-systemd-flag-059215
	W1205 07:45:54.604946  659790 cli_runner.go:211] docker network inspect force-systemd-flag-059215 returned with exit code 1
	I1205 07:45:54.604977  659790 network_create.go:287] error running [docker network inspect force-systemd-flag-059215]: docker network inspect force-systemd-flag-059215: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-059215 not found
	I1205 07:45:54.604990  659790 network_create.go:289] output of [docker network inspect force-systemd-flag-059215]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-059215 not found
	
	** /stderr **
	I1205 07:45:54.605091  659790 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:54.622146  659790 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e1bc6b978299 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:be:9b:c4:3d:55} reservation:<nil>}
	I1205 07:45:54.622514  659790 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0742a768fc47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:89:ef:6a:d0:94} reservation:<nil>}
	I1205 07:45:54.622730  659790 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a0bbafcf8abb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:36:32:11:e0:00:4c} reservation:<nil>}
	I1205 07:45:54.622992  659790 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-013d49ba4b40 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:30:94:aa:e4:68} reservation:<nil>}
	I1205 07:45:54.623403  659790 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a55280}
	I1205 07:45:54.623421  659790 network_create.go:124] attempt to create docker network force-systemd-flag-059215 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1205 07:45:54.623485  659790 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-059215 force-systemd-flag-059215
	I1205 07:45:54.682042  659790 network_create.go:108] docker network force-systemd-flag-059215 192.168.85.0/24 created
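
The "skipping subnet ... that is taken" lines and the final "using free private subnet" line show the scan order: candidate /24 networks spaced 9 apart in the third octet, starting at 192.168.49.0/24. A simplified sketch of that scan (the taken list is hard-coded here for illustration; the real network.go derives it from the host's interfaces and reservations):

    package main

    import "fmt"

    func main() {
        // Subnets the log shows as already taken by other minikube networks.
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        // Candidates step by 9 in the third octet: 49, 58, 67, 76, 85, ...
        for octet := 49; octet <= 254; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                fmt.Println("using free private subnet", cidr) // prints 192.168.85.0/24
                return
            }
        }
    }
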
	I1205 07:45:54.682075  659790 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-059215" container
	I1205 07:45:54.682169  659790 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 07:45:54.699122  659790 cli_runner.go:164] Run: docker volume create force-systemd-flag-059215 --label name.minikube.sigs.k8s.io=force-systemd-flag-059215 --label created_by.minikube.sigs.k8s.io=true
	I1205 07:45:54.717524  659790 oci.go:103] Successfully created a docker volume force-systemd-flag-059215
	I1205 07:45:54.717611  659790 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-059215-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-059215 --entrypoint /usr/bin/test -v force-systemd-flag-059215:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1205 07:45:55.288393  659790 oci.go:107] Successfully prepared a docker volume force-systemd-flag-059215
	I1205 07:45:55.288458  659790 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:45:55.288469  659790 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 07:45:55.288538  659790 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-059215:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 07:45:59.325977  659790 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-059215:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.037402776s)
	I1205 07:45:59.326011  659790 kic.go:203] duration metric: took 4.037539557s to extract preloaded images to volume ...
	W1205 07:45:59.326162  659790 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 07:45:59.326286  659790 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 07:45:59.387076  659790 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-059215 --name force-systemd-flag-059215 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-059215 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-059215 --network force-systemd-flag-059215 --ip 192.168.85.2 --volume force-systemd-flag-059215:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1205 07:45:59.678649  659790 cli_runner.go:164] Run: docker container inspect force-systemd-flag-059215 --format={{.State.Running}}
	I1205 07:45:59.700856  659790 cli_runner.go:164] Run: docker container inspect force-systemd-flag-059215 --format={{.State.Status}}
	I1205 07:45:59.723854  659790 cli_runner.go:164] Run: docker exec force-systemd-flag-059215 stat /var/lib/dpkg/alternatives/iptables
	I1205 07:45:59.777809  659790 oci.go:144] the created container "force-systemd-flag-059215" has a running status.
	I1205 07:45:59.777836  659790 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa...
	I1205 07:46:00.174940  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 07:46:00.175005  659790 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 07:46:00.234469  659790 cli_runner.go:164] Run: docker container inspect force-systemd-flag-059215 --format={{.State.Status}}
	I1205 07:46:00.296814  659790 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 07:46:00.296837  659790 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-059215 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 07:46:00.377785  659790 cli_runner.go:164] Run: docker container inspect force-systemd-flag-059215 --format={{.State.Status}}
	I1205 07:46:00.419267  659790 machine.go:94] provisionDockerMachine start ...
	I1205 07:46:00.419388  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:00.439273  659790 main.go:143] libmachine: Using SSH client type: native
	I1205 07:46:00.439614  659790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1205 07:46:00.439631  659790 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:46:00.440357  659790 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1205 07:46:03.589999  659790 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-059215
	
	I1205 07:46:03.590022  659790 ubuntu.go:182] provisioning hostname "force-systemd-flag-059215"
	I1205 07:46:03.590096  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:03.607361  659790 main.go:143] libmachine: Using SSH client type: native
	I1205 07:46:03.607677  659790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1205 07:46:03.607693  659790 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-059215 && echo "force-systemd-flag-059215" | sudo tee /etc/hostname
	I1205 07:46:03.763675  659790 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-059215
	
	I1205 07:46:03.763764  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:03.781576  659790 main.go:143] libmachine: Using SSH client type: native
	I1205 07:46:03.781899  659790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1205 07:46:03.781922  659790 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-059215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-059215/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-059215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:46:03.930805  659790 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:46:03.930848  659790 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 07:46:03.930878  659790 ubuntu.go:190] setting up certificates
	I1205 07:46:03.930891  659790 provision.go:84] configureAuth start
	I1205 07:46:03.930977  659790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-059215
	I1205 07:46:03.950151  659790 provision.go:143] copyHostCerts
	I1205 07:46:03.950204  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 07:46:03.950238  659790 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 07:46:03.950250  659790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 07:46:03.950331  659790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 07:46:03.950446  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 07:46:03.950470  659790 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 07:46:03.950480  659790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 07:46:03.950510  659790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 07:46:03.950569  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 07:46:03.950588  659790 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 07:46:03.950598  659790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 07:46:03.950623  659790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 07:46:03.950710  659790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-059215 san=[127.0.0.1 192.168.85.2 force-systemd-flag-059215 localhost minikube]
	I1205 07:46:04.232574  659790 provision.go:177] copyRemoteCerts
	I1205 07:46:04.232668  659790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:46:04.232732  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:04.253163  659790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa Username:docker}
	I1205 07:46:04.359674  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 07:46:04.359741  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:46:04.378170  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 07:46:04.378230  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 07:46:04.397117  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 07:46:04.397228  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 07:46:04.415569  659790 provision.go:87] duration metric: took 484.650699ms to configureAuth
	I1205 07:46:04.415601  659790 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:46:04.415796  659790 config.go:182] Loaded profile config "force-systemd-flag-059215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:46:04.415913  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:04.433589  659790 main.go:143] libmachine: Using SSH client type: native
	I1205 07:46:04.433908  659790 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1205 07:46:04.433947  659790 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:46:04.727642  659790 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:46:04.727687  659790 machine.go:97] duration metric: took 4.308397715s to provisionDockerMachine
	I1205 07:46:04.727699  659790 client.go:176] duration metric: took 10.156731511s to LocalClient.Create
	I1205 07:46:04.727723  659790 start.go:167] duration metric: took 10.156856009s to libmachine.API.Create "force-systemd-flag-059215"
	I1205 07:46:04.727736  659790 start.go:293] postStartSetup for "force-systemd-flag-059215" (driver="docker")
	I1205 07:46:04.727747  659790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:46:04.727824  659790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:46:04.727871  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:04.745839  659790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa Username:docker}
	I1205 07:46:04.850499  659790 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:46:04.853893  659790 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:46:04.853921  659790 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:46:04.853938  659790 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 07:46:04.853996  659790 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 07:46:04.854080  659790 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 07:46:04.854088  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /etc/ssl/certs/4441472.pem
	I1205 07:46:04.854191  659790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:46:04.862028  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:46:04.880248  659790 start.go:296] duration metric: took 152.497143ms for postStartSetup
	I1205 07:46:04.880629  659790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-059215
	I1205 07:46:04.897973  659790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/config.json ...
	I1205 07:46:04.898267  659790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:46:04.898321  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:04.915254  659790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa Username:docker}
	I1205 07:46:05.016115  659790 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:46:05.021497  659790 start.go:128] duration metric: took 10.45427672s to createHost
	I1205 07:46:05.021540  659790 start.go:83] releasing machines lock for "force-systemd-flag-059215", held for 10.454423387s
	I1205 07:46:05.021629  659790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-059215
	I1205 07:46:05.038726  659790 ssh_runner.go:195] Run: cat /version.json
	I1205 07:46:05.038790  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:05.039025  659790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:46:05.039094  659790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-059215
	I1205 07:46:05.061785  659790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa Username:docker}
	I1205 07:46:05.063233  659790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/force-systemd-flag-059215/id_rsa Username:docker}
	I1205 07:46:05.162275  659790 ssh_runner.go:195] Run: systemctl --version
	I1205 07:46:05.253130  659790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:46:05.289698  659790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:46:05.294131  659790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:46:05.294234  659790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:46:05.322555  659790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1205 07:46:05.322589  659790 start.go:496] detecting cgroup driver to use...
	I1205 07:46:05.322604  659790 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1205 07:46:05.322670  659790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:46:05.341207  659790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:46:05.354010  659790 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:46:05.354078  659790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:46:05.372623  659790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:46:05.391423  659790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:46:05.505498  659790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:46:05.654660  659790 docker.go:234] disabling docker service ...
	I1205 07:46:05.654781  659790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:46:05.675681  659790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:46:05.689286  659790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:46:05.802275  659790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:46:05.920735  659790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:46:05.934178  659790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:46:05.948747  659790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:46:05.948813  659790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:46:05.957622  659790 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1205 07:46:05.957717  659790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:46:05.966654  659790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:46:05.975183  659790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:46:05.983954  659790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:46:05.991400  659790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:46:05.999776  659790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:46:06.016762  659790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
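
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to the fragment below (reconstructed from the commands themselves, not captured from the node; surrounding TOML section headers omitted):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
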
	I1205 07:46:06.026481  659790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:46:06.034484  659790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:46:06.042049  659790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:46:06.148228  659790 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:46:06.313922  659790 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:46:06.314074  659790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:46:06.319922  659790 start.go:564] Will wait 60s for crictl version
	I1205 07:46:06.320035  659790 ssh_runner.go:195] Run: which crictl
	I1205 07:46:06.324130  659790 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:46:06.358296  659790 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:46:06.358465  659790 ssh_runner.go:195] Run: crio --version
	I1205 07:46:06.386486  659790 ssh_runner.go:195] Run: crio --version
	I1205 07:46:06.422325  659790 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 07:46:06.425205  659790 cli_runner.go:164] Run: docker network inspect force-systemd-flag-059215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:46:06.441369  659790 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:46:06.445156  659790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:46:06.455009  659790 kubeadm.go:884] updating cluster {Name:force-systemd-flag-059215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-059215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:46:06.455143  659790 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:46:06.455207  659790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:46:06.491247  659790 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:46:06.491272  659790 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:46:06.491333  659790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:46:06.520808  659790 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:46:06.520838  659790 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:46:06.520846  659790 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1205 07:46:06.520945  659790 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-059215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-059215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:46:06.521030  659790 ssh_runner.go:195] Run: crio config
	I1205 07:46:06.581398  659790 cni.go:84] Creating CNI manager for ""
	I1205 07:46:06.581422  659790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:46:06.581439  659790 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:46:06.581461  659790 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-059215 NodeName:force-systemd-flag-059215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:46:06.581584  659790 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-059215"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
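
The kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch that lists the documents, assuming the file has been copied off the node to the working directory:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("kubeadm.yaml.new") // local copy; path is an assumption
        if err != nil {
            panic(err)
        }
        // kubeadm configs separate documents with a bare "---" line.
        for i, doc := range strings.Split(string(raw), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                // Print just the identifying fields of each document.
                if strings.HasPrefix(line, "apiVersion: ") || strings.HasPrefix(line, "kind: ") {
                    fmt.Printf("doc %d: %s\n", i, line)
                }
            }
        }
    }
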
	I1205 07:46:06.581674  659790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:46:06.589332  659790 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:46:06.589398  659790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:46:06.596929  659790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1205 07:46:06.609324  659790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:46:06.621775  659790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1205 07:46:06.634451  659790 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:46:06.637903  659790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 07:46:06.647833  659790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:46:06.763850  659790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:46:06.779622  659790 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215 for IP: 192.168.85.2
	I1205 07:46:06.779694  659790 certs.go:195] generating shared ca certs ...
	I1205 07:46:06.779724  659790 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:46:06.779910  659790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 07:46:06.780008  659790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 07:46:06.780031  659790 certs.go:257] generating profile certs ...
	I1205 07:46:06.780118  659790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/client.key
	I1205 07:46:06.780153  659790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/client.crt with IP's: []
	I1205 07:46:06.914784  659790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/client.crt ...
	I1205 07:46:06.914818  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/client.crt: {Name:mkccbbfea1f5ef0897130a1b48ff8fce2ad0c8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:46:06.915024  659790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/client.key ...
	I1205 07:46:06.915040  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/client.key: {Name:mkf467ab065f7065cb55ddb98e269bfe0cbf3dea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:46:06.915142  659790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key.67106ea7
	I1205 07:46:06.915165  659790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt.67106ea7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1205 07:46:07.397699  659790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt.67106ea7 ...
	I1205 07:46:07.397729  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt.67106ea7: {Name:mkf0c9cc272c143fbc9d78098dbb1cb19e1b469b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:46:07.397911  659790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key.67106ea7 ...
	I1205 07:46:07.397920  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key.67106ea7: {Name:mk1f4c7720cd3553c4bbc78e4a6129c60ba77d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:46:07.398005  659790 certs.go:382] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt.67106ea7 -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt
	I1205 07:46:07.398079  659790 certs.go:386] copying /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key.67106ea7 -> /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key
	I1205 07:46:07.398132  659790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.key
	I1205 07:46:07.398145  659790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.crt with IP's: []
	I1205 07:46:07.666351  659790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.crt ...
	I1205 07:46:07.666392  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.crt: {Name:mk1614cee9ab8849a9296a19546018292cb34224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:46:07.666576  659790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.key ...
	I1205 07:46:07.666591  659790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.key: {Name:mk24ea8f9da54c0ddd77b44bc9feaee18a83bd9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
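The profile certs generated above are leaf certificates signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A self-contained Go sketch of that pattern using only the standard library; the key sizes, lifetimes, and subjects here are placeholders, not minikube's actual values:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in for the on-disk ca.key/ca.crt pair the log says it reuses.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Leaf cert with the IP SANs from the log's apiserver cert.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}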
	I1205 07:46:07.666676  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 07:46:07.666696  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 07:46:07.666708  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 07:46:07.666724  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 07:46:07.666742  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 07:46:07.666758  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 07:46:07.666776  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 07:46:07.666794  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 07:46:07.666847  659790 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 07:46:07.666893  659790 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 07:46:07.666906  659790 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:46:07.666936  659790 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:46:07.666966  659790 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:46:07.666995  659790 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 07:46:07.667044  659790 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:46:07.667079  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> /usr/share/ca-certificates/4441472.pem
	I1205 07:46:07.667097  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:46:07.667108  659790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem -> /usr/share/ca-certificates/444147.pem
	I1205 07:46:07.667628  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:46:07.684857  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:46:07.701857  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:46:07.720228  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:46:07.738285  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 07:46:07.755227  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:46:07.774262  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:46:07.795174  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/force-systemd-flag-059215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:46:07.815111  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 07:46:07.833131  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:46:07.850785  659790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 07:46:07.870138  659790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:46:07.883596  659790 ssh_runner.go:195] Run: openssl version
	I1205 07:46:07.890055  659790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 07:46:07.898056  659790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 07:46:07.905790  659790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 07:46:07.909492  659790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 07:46:07.909613  659790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 07:46:07.950453  659790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:46:07.957822  659790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4441472.pem /etc/ssl/certs/3ec20f2e.0
	I1205 07:46:07.965107  659790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:46:07.972656  659790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:46:07.980212  659790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:46:07.983958  659790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:46:07.984095  659790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:46:08.025038  659790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:46:08.033056  659790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1205 07:46:08.040895  659790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 07:46:08.048917  659790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 07:46:08.056780  659790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 07:46:08.060739  659790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 07:46:08.060828  659790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 07:46:08.107089  659790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:46:08.114339  659790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/444147.pem /etc/ssl/certs/51391683.0
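The openssl/ln sequence above installs each CA the way OpenSSL's trust store expects: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 to the certificate. A small Go sketch of those two steps, shelling out to openssl for the hash (computing the subject hash natively is more involved than it looks); the paths are the ones from the log:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // "ln -fs": force-replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}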
	I1205 07:46:08.121600  659790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:46:08.125101  659790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 07:46:08.125157  659790 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-059215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-059215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:46:08.125238  659790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:46:08.125301  659790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:46:08.152006  659790 cri.go:89] found id: ""
	I1205 07:46:08.152083  659790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:46:08.159930  659790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 07:46:08.167628  659790 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1205 07:46:08.167695  659790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 07:46:08.175354  659790 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 07:46:08.175423  659790 kubeadm.go:158] found existing configuration files:
	
	I1205 07:46:08.175510  659790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 07:46:08.183299  659790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 07:46:08.183374  659790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 07:46:08.190642  659790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 07:46:08.198299  659790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 07:46:08.198360  659790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 07:46:08.205760  659790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 07:46:08.213565  659790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 07:46:08.213671  659790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 07:46:08.221048  659790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 07:46:08.228651  659790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 07:46:08.228756  659790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 07:46:08.235911  659790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 07:46:08.294128  659790 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1205 07:46:08.294589  659790 kubeadm.go:319] [preflight] Running pre-flight checks
	I1205 07:46:08.322326  659790 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1205 07:46:08.322480  659790 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1205 07:46:08.322547  659790 kubeadm.go:319] OS: Linux
	I1205 07:46:08.322615  659790 kubeadm.go:319] CGROUPS_CPU: enabled
	I1205 07:46:08.322700  659790 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1205 07:46:08.322773  659790 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1205 07:46:08.322846  659790 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1205 07:46:08.322920  659790 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1205 07:46:08.322997  659790 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1205 07:46:08.323100  659790 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1205 07:46:08.323184  659790 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1205 07:46:08.323267  659790 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1205 07:46:08.400043  659790 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 07:46:08.400228  659790 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 07:46:08.400368  659790 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 07:46:08.408028  659790 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 07:46:08.414499  659790 out.go:252]   - Generating certificates and keys ...
	I1205 07:46:08.414657  659790 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1205 07:46:08.414758  659790 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1205 07:46:08.598888  659790 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 07:46:09.022757  659790 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1205 07:46:11.042442  659790 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1205 07:46:12.462564  659790 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1205 07:46:12.701912  659790 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1205 07:46:12.702303  659790 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-059215 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:46:13.154833  659790 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1205 07:46:13.155200  659790 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-059215 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1205 07:46:13.507087  659790 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 07:46:13.901722  659790 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 07:46:13.948156  659790 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1205 07:46:13.948408  659790 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 07:46:14.143783  659790 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 07:46:14.703837  659790 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 07:46:15.462453  659790 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 07:46:15.603239  659790 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 07:46:17.819068  659790 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 07:46:17.819803  659790 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 07:46:17.822798  659790 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 07:46:17.826229  659790 out.go:252]   - Booting up control plane ...
	I1205 07:46:17.826360  659790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 07:46:17.826484  659790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 07:46:17.827193  659790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 07:46:17.842790  659790 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 07:46:17.843186  659790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1205 07:46:17.851974  659790 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1205 07:46:17.853394  659790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 07:46:17.853808  659790 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1205 07:46:17.990860  659790 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 07:46:17.990982  659790 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 07:46:18.987371  659790 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001670886s
	I1205 07:46:18.990868  659790 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1205 07:46:18.990961  659790 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1205 07:46:18.991276  659790 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1205 07:46:18.991366  659790 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1205 07:46:22.216452  623133 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001059422s
	I1205 07:46:22.216484  623133 kubeadm.go:319] 
	I1205 07:46:22.216539  623133 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1205 07:46:22.216570  623133 kubeadm.go:319] 	- The kubelet is not running
	I1205 07:46:22.216668  623133 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 07:46:22.216674  623133 kubeadm.go:319] 
	I1205 07:46:22.216772  623133 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 07:46:22.216802  623133 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1205 07:46:22.216832  623133 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1205 07:46:22.216835  623133 kubeadm.go:319] 
	I1205 07:46:22.221347  623133 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1205 07:46:22.221774  623133 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1205 07:46:22.221881  623133 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 07:46:22.222153  623133 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1205 07:46:22.222160  623133 kubeadm.go:319] 
	I1205 07:46:22.222228  623133 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
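The failure above is kubeadm's kubelet health check: it polls http://127.0.0.1:10248/healthz (the `curl -sSL` it mentions) for up to 4 minutes and gives up when the connection keeps being refused. A minimal Go sketch of that probe loop, with timeout values chosen to match the log rather than taken from kubeadm's source:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
				return
			}
			// While the kubelet is down, each attempt is the log's
			// "dial tcp 127.0.0.1:10248: connect: connection refused".
			time.Sleep(time.Second)
		}
		fmt.Println("kubelet never became healthy before the deadline")
	}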
	I1205 07:46:22.222279  623133 kubeadm.go:403] duration metric: took 12m7.744210771s to StartCluster
	I1205 07:46:22.222312  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 07:46:22.222371  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 07:46:22.285575  623133 cri.go:89] found id: ""
	I1205 07:46:22.285598  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.285606  623133 logs.go:284] No container was found matching "kube-apiserver"
	I1205 07:46:22.285612  623133 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 07:46:22.285677  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 07:46:22.328010  623133 cri.go:89] found id: ""
	I1205 07:46:22.328032  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.328040  623133 logs.go:284] No container was found matching "etcd"
	I1205 07:46:22.328046  623133 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 07:46:22.328103  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 07:46:22.379159  623133 cri.go:89] found id: ""
	I1205 07:46:22.379180  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.379189  623133 logs.go:284] No container was found matching "coredns"
	I1205 07:46:22.379199  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 07:46:22.379258  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 07:46:22.431742  623133 cri.go:89] found id: ""
	I1205 07:46:22.431764  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.431772  623133 logs.go:284] No container was found matching "kube-scheduler"
	I1205 07:46:22.431779  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 07:46:22.431838  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 07:46:22.488582  623133 cri.go:89] found id: ""
	I1205 07:46:22.488603  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.488612  623133 logs.go:284] No container was found matching "kube-proxy"
	I1205 07:46:22.488618  623133 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 07:46:22.488677  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 07:46:22.532122  623133 cri.go:89] found id: ""
	I1205 07:46:22.532195  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.532217  623133 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 07:46:22.532237  623133 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 07:46:22.532324  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 07:46:22.592637  623133 cri.go:89] found id: ""
	I1205 07:46:22.592719  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.592743  623133 logs.go:284] No container was found matching "kindnet"
	I1205 07:46:22.592761  623133 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 07:46:22.592866  623133 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 07:46:22.632766  623133 cri.go:89] found id: ""
	I1205 07:46:22.632841  623133 logs.go:282] 0 containers: []
	W1205 07:46:22.632880  623133 logs.go:284] No container was found matching "storage-provisioner"
	I1205 07:46:22.632902  623133 logs.go:123] Gathering logs for container status ...
	I1205 07:46:22.632938  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 07:46:22.688671  623133 logs.go:123] Gathering logs for kubelet ...
	I1205 07:46:22.688743  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 07:46:22.783159  623133 logs.go:123] Gathering logs for dmesg ...
	I1205 07:46:22.783240  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 07:46:22.802062  623133 logs.go:123] Gathering logs for describe nodes ...
	I1205 07:46:22.802087  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 07:46:22.948734  623133 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 07:46:22.948793  623133 logs.go:123] Gathering logs for CRI-O ...
	I1205 07:46:22.948828  623133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1205 07:46:23.016579  623133 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001059422s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1205 07:46:23.016687  623133 out.go:285] * 
	W1205 07:46:23.016888  623133 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001059422s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:46:23.016942  623133 out.go:285] * 
	W1205 07:46:23.019270  623133 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:46:23.028967  623133 out.go:203] 
	W1205 07:46:23.032868  623133 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001059422s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 07:46:23.032908  623133 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 07:46:23.032927  623133 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 07:46:23.036043  623133 out.go:203] 
	I1205 07:46:21.691887  659790 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.700705822s
	
	
	==> CRI-O <==
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.571141608Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=fd814132-5b20-4824-9aa4-3c0bce45bf10 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.583486785Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4054f156-8b53-4737-af21-837d33becfa2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.583638155Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-beta.0 not found" id=4054f156-8b53-4737-af21-837d33becfa2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.583689906Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-apiserver:v1.35.0-beta.0 found" id=4054f156-8b53-4737-af21-837d33becfa2 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.59293518Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=42e91841-cce1-4ac2-a050-42ce6c02e1e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.59309172Z" level=info msg="Image registry.k8s.io/etcd:3.6.5-0 not found" id=42e91841-cce1-4ac2-a050-42ce6c02e1e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.593144988Z" level=info msg="Neither image nor artfiact registry.k8s.io/etcd:3.6.5-0 found" id=42e91841-cce1-4ac2-a050-42ce6c02e1e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.601885018Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=3fd911a1-2bd4-4427-b7cd-79d831f9b3b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.602033565Z" level=info msg="Image registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 not found" id=3fd911a1-2bd4-4427-b7cd-79d831f9b3b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:33:58 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:33:58.602069898Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 found" id=3fd911a1-2bd4-4427-b7cd-79d831f9b3b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:34:02 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:34:02.070827031Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=72a7824d-31af-48a9-a998-ad37dd1c58cf name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.937685314Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=eb0f3a39-9dbb-448f-9590-e71409aaf7b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.94627011Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f8bf55ed-c1d3-42c8-8791-f064143a3acb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.948033496Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=677e09f2-c1ee-452e-9efe-d35a62a4feed name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.951263813Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ce687954-a169-4211-aa0c-29f1e9da1315 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.952285585Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=85c806bd-460b-4d15-9107-05e9767735a9 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.953728122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9da5fa22-6a2b-4abb-b329-a83ce1f6365f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:38:18 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:38:18.955918032Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=90a3bb5d-3850-4fa0-8a98-1c03ca9a350c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.676652271Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=23c92e7c-0e73-4388-8f8a-6d4119dd1319 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.678269677Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=22f2da2f-ce1c-45b6-9ecb-64a9f156d270 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.679748194Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=49fb83d1-83c0-4c54-a779-f598321043db name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.681187941Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d2387466-b3c8-4b0c-b482-b96c93f42649 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.682057228Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b2d13bb0-90bd-4a63-a7af-cd626ec99086 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.683409122Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=89669703-5125-444f-804c-ec4ea1376232 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 07:42:20 kubernetes-upgrade-421996 crio[614]: time="2025-12-05T07:42:20.684217526Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=8a5e71b8-6d09-4452-9908-f5c540db9c3a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 07:10] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:11] overlayfs: idmapped layers are currently not supported
	[  +3.073089] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:12] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:13] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:14] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:19] overlayfs: idmapped layers are currently not supported
	[ +33.161652] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:21] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:22] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:23] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:24] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:25] overlayfs: idmapped layers are currently not supported
	[ +19.047599] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:26] overlayfs: idmapped layers are currently not supported
	[ +16.337115] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:27] overlayfs: idmapped layers are currently not supported
	[ +25.534355] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:28] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:30] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:32] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:33] overlayfs: idmapped layers are currently not supported
	[ +28.256020] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:44] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:46] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:46:25 up  4:28,  0 user,  load average: 2.20, 1.73, 1.83
	Linux kubernetes-upgrade-421996 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 05 07:46:22 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:46:23 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 05 07:46:23 kubernetes-upgrade-421996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:46:23 kubernetes-upgrade-421996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:46:23 kubernetes-upgrade-421996 kubelet[12904]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:46:23 kubernetes-upgrade-421996 kubelet[12904]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:46:23 kubernetes-upgrade-421996 kubelet[12904]: E1205 07:46:23.929047   12904 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:46:23 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:46:23 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:46:24 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 05 07:46:24 kubernetes-upgrade-421996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:46:24 kubernetes-upgrade-421996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:46:24 kubernetes-upgrade-421996 kubelet[12925]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:46:24 kubernetes-upgrade-421996 kubelet[12925]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:46:24 kubernetes-upgrade-421996 kubelet[12925]: E1205 07:46:24.872264   12925 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:46:24 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:46:24 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 07:46:25 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 05 07:46:25 kubernetes-upgrade-421996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:46:25 kubernetes-upgrade-421996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 05 07:46:25 kubernetes-upgrade-421996 kubelet[13010]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:46:25 kubernetes-upgrade-421996 kubelet[13010]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 05 07:46:25 kubernetes-upgrade-421996 kubelet[13010]: E1205 07:46:25.639774   13010 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 05 07:46:25 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 05 07:46:25 kubernetes-upgrade-421996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-421996 -n kubernetes-upgrade-421996
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-421996 -n kubernetes-upgrade-421996: exit status 2 (486.105356ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-421996" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-421996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-421996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-421996: (2.507449322s)
--- FAIL: TestKubernetesUpgrade (804.43s)
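
The kubelet journal above is the proximate cause of this failure: the v1.35.0-beta.0 kubelet fails its own configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a tight loop (restart counter 963 through 965 in the excerpt), and with no kubelet the upgraded apiserver never comes up, which is why "kubectl describe nodes" is refused on localhost:8443 and the profile status reports Stopped. A minimal way to confirm a host's cgroup mode (standard coreutils invocation; the expected outputs are the usual values, not taken from this run):

	stat -fc %T /sys/fs/cgroup/    # cgroup2fs => cgroup v2 (unified); tmpfs => cgroup v1 (legacy/hybrid)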

TestPause/serial/Pause (6.21s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-908773 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-908773 --alsologtostderr -v=5: exit status 80 (1.717189537s)

-- stdout --
	* Pausing node pause-908773 ... 
	
	

-- /stdout --
** stderr ** 
	I1205 07:45:45.661494  658393 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:45.662973  658393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:45.663008  658393 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:45.663045  658393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:45.663786  658393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:45:45.664286  658393 out.go:368] Setting JSON to false
	I1205 07:45:45.664391  658393 mustload.go:66] Loading cluster: pause-908773
	I1205 07:45:45.665169  658393 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:45.665896  658393 cli_runner.go:164] Run: docker container inspect pause-908773 --format={{.State.Status}}
	I1205 07:45:45.683707  658393 host.go:66] Checking if "pause-908773" exists ...
	I1205 07:45:45.684047  658393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:45.758556  658393 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:45:45.74879869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:45.759185  658393 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-908773 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1205 07:45:45.763972  658393 out.go:179] * Pausing node pause-908773 ... 
	I1205 07:45:45.766827  658393 host.go:66] Checking if "pause-908773" exists ...
	I1205 07:45:45.767188  658393 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:45.767239  658393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:45.785071  658393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:45.893267  658393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:45:45.906177  658393 pause.go:52] kubelet running: true
	I1205 07:45:45.906245  658393 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:45:46.129861  658393 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:45:46.129974  658393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:45:46.198913  658393 cri.go:89] found id: "42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6"
	I1205 07:45:46.198936  658393 cri.go:89] found id: "bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3"
	I1205 07:45:46.198940  658393 cri.go:89] found id: "1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b"
	I1205 07:45:46.198944  658393 cri.go:89] found id: "c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294"
	I1205 07:45:46.198947  658393 cri.go:89] found id: "d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3"
	I1205 07:45:46.198950  658393 cri.go:89] found id: "85fa8e67c22567024fa7b705e2ed7964a9445424ae54ee1525dc8ba5fb4a3a6a"
	I1205 07:45:46.198959  658393 cri.go:89] found id: "b0092a97b6fe65c8ffaef671961badb67a3e4d6d62d80c42d2051994069dcaae"
	I1205 07:45:46.198962  658393 cri.go:89] found id: "a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1"
	I1205 07:45:46.198965  658393 cri.go:89] found id: "d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97"
	I1205 07:45:46.198973  658393 cri.go:89] found id: "ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7"
	I1205 07:45:46.198976  658393 cri.go:89] found id: "497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f"
	I1205 07:45:46.198979  658393 cri.go:89] found id: "00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896"
	I1205 07:45:46.198982  658393 cri.go:89] found id: "fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	I1205 07:45:46.198985  658393 cri.go:89] found id: "9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	I1205 07:45:46.198989  658393 cri.go:89] found id: ""
	I1205 07:45:46.199045  658393 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:45:46.209821  658393 retry.go:31] will retry after 172.176008ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:45:46Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:45:46.382249  658393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:45:46.395191  658393 pause.go:52] kubelet running: false
	I1205 07:45:46.395254  658393 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:45:46.531530  658393 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:45:46.531617  658393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:45:46.596768  658393 cri.go:89] found id: "42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6"
	I1205 07:45:46.596794  658393 cri.go:89] found id: "bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3"
	I1205 07:45:46.596799  658393 cri.go:89] found id: "1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b"
	I1205 07:45:46.596803  658393 cri.go:89] found id: "c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294"
	I1205 07:45:46.596807  658393 cri.go:89] found id: "d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3"
	I1205 07:45:46.596811  658393 cri.go:89] found id: "85fa8e67c22567024fa7b705e2ed7964a9445424ae54ee1525dc8ba5fb4a3a6a"
	I1205 07:45:46.596814  658393 cri.go:89] found id: "b0092a97b6fe65c8ffaef671961badb67a3e4d6d62d80c42d2051994069dcaae"
	I1205 07:45:46.596817  658393 cri.go:89] found id: "a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1"
	I1205 07:45:46.596820  658393 cri.go:89] found id: "d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97"
	I1205 07:45:46.596826  658393 cri.go:89] found id: "ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7"
	I1205 07:45:46.596830  658393 cri.go:89] found id: "497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f"
	I1205 07:45:46.596833  658393 cri.go:89] found id: "00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896"
	I1205 07:45:46.596836  658393 cri.go:89] found id: "fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	I1205 07:45:46.596844  658393 cri.go:89] found id: "9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	I1205 07:45:46.596847  658393 cri.go:89] found id: ""
	I1205 07:45:46.596897  658393 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:45:46.608075  658393 retry.go:31] will retry after 473.162425ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:45:46Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:45:47.081797  658393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:45:47.094677  658393 pause.go:52] kubelet running: false
	I1205 07:45:47.094764  658393 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1205 07:45:47.229822  658393 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1205 07:45:47.229918  658393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1205 07:45:47.293134  658393 cri.go:89] found id: "42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6"
	I1205 07:45:47.293215  658393 cri.go:89] found id: "bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3"
	I1205 07:45:47.293236  658393 cri.go:89] found id: "1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b"
	I1205 07:45:47.293253  658393 cri.go:89] found id: "c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294"
	I1205 07:45:47.293272  658393 cri.go:89] found id: "d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3"
	I1205 07:45:47.293295  658393 cri.go:89] found id: "85fa8e67c22567024fa7b705e2ed7964a9445424ae54ee1525dc8ba5fb4a3a6a"
	I1205 07:45:47.293313  658393 cri.go:89] found id: "b0092a97b6fe65c8ffaef671961badb67a3e4d6d62d80c42d2051994069dcaae"
	I1205 07:45:47.293330  658393 cri.go:89] found id: "a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1"
	I1205 07:45:47.293346  658393 cri.go:89] found id: "d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97"
	I1205 07:45:47.293377  658393 cri.go:89] found id: "ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7"
	I1205 07:45:47.293401  658393 cri.go:89] found id: "497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f"
	I1205 07:45:47.293417  658393 cri.go:89] found id: "00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896"
	I1205 07:45:47.293432  658393 cri.go:89] found id: "fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	I1205 07:45:47.293450  658393 cri.go:89] found id: "9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	I1205 07:45:47.293472  658393 cri.go:89] found id: ""
	I1205 07:45:47.293542  658393 ssh_runner.go:195] Run: sudo runc list -f json
	I1205 07:45:47.308053  658393 out.go:203] 
	W1205 07:45:47.311018  658393 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:45:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:45:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1205 07:45:47.311041  658393 out.go:285] * 
	* 
	W1205 07:45:47.318052  658393 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 07:45:47.322941  658393 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-908773 --alsologtostderr -v=5" : exit status 80
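
The pause failure itself is at the runtime layer rather than in Kubernetes: after disabling the kubelet, minikube shells into the node and runs "sudo runc list -f json", and runc exits 1 because its default state directory is absent ("open /run/runc: no such file or directory"); the retries at 07:45:46 hit the same error, so the command surfaces it as GUEST_PAUSE. Two quick follow-up checks from the host (both commands are standard; whether this kicbase/crio build keeps runc state under /run/runc at all is the open question, not a given):

	out/minikube-linux-arm64 -p pause-908773 ssh -- sudo ls /run/runc    # the directory the failing 'runc list' expects
	out/minikube-linux-arm64 -p pause-908773 ssh -- sudo crictl ps -a   # the same containers, queried through the CRI instead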
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-908773
helpers_test.go:243: (dbg) docker inspect pause-908773:

-- stdout --
	[
	    {
	        "Id": "edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de",
	        "Created": "2025-12-05T07:44:05.178977974Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 654556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:44:05.245204233Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/hostname",
	        "HostsPath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/hosts",
	        "LogPath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de-json.log",
	        "Name": "/pause-908773",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-908773:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-908773",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de",
	                "LowerDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-908773",
	                "Source": "/var/lib/docker/volumes/pause-908773/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-908773",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-908773",
	                "name.minikube.sigs.k8s.io": "pause-908773",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f07ad93579a48743e1636244c3580a6795093af817ed5b3130acfe5bba204569",
	            "SandboxKey": "/var/run/docker/netns/f07ad93579a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-908773": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:7c:29:1f:b2:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67fcaa158c8d7e5509bd17a7693b9a7f5caaad1d628c6c36388cca528c02cf2f",
	                    "EndpointID": "3281418e267bf62db907be1047490b2734749fa8ff65e7218c47892b2579700f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-908773",
	                        "edf442d4a4e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
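
The inspect output at least rules out a dead container: State.Running is true and Paused is false, and every guest port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1, which is how the ssh client in the stderr log above reached the node on host port 33393. The mapping can be read back with the same Go template the pause code runs (shell-quoted here; 33393 is this run's ephemeral port and will differ between runs):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-908773    # -> 33393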
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-908773 -n pause-908773
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-908773 -n pause-908773: exit status 2 (335.099439ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-908773 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-908773 logs -n 25: (1.374300191s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-587853 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p missing-upgrade-168812 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-168812    │ jenkins │ v1.35.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ delete  │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ ssh     │ -p NoKubernetes-587853 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │                     │
	│ start   │ -p missing-upgrade-168812 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-168812    │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:33 UTC │
	│ ssh     │ -p NoKubernetes-587853 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │                     │
	│ delete  │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p missing-upgrade-168812                                                                                                                       │ missing-upgrade-168812    │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p kubernetes-upgrade-421996                                                                                                                    │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │                     │
	│ start   │ -p stopped-upgrade-837565 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-837565    │ jenkins │ v1.35.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ stop    │ stopped-upgrade-837565 stop                                                                                                                     │ stopped-upgrade-837565    │ jenkins │ v1.35.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p stopped-upgrade-837565 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-837565    │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:38 UTC │
	│ delete  │ -p stopped-upgrade-837565                                                                                                                       │ stopped-upgrade-837565    │ jenkins │ v1.37.0 │ 05 Dec 25 07:38 UTC │ 05 Dec 25 07:38 UTC │
	│ start   │ -p running-upgrade-685187 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-685187    │ jenkins │ v1.35.0 │ 05 Dec 25 07:38 UTC │ 05 Dec 25 07:39 UTC │
	│ start   │ -p running-upgrade-685187 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-685187    │ jenkins │ v1.37.0 │ 05 Dec 25 07:39 UTC │ 05 Dec 25 07:43 UTC │
	│ delete  │ -p running-upgrade-685187                                                                                                                       │ running-upgrade-685187    │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │ 05 Dec 25 07:43 UTC │
	│ start   │ -p pause-908773 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p pause-908773 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ pause   │ -p pause-908773 --alsologtostderr -v=5                                                                                                          │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:45:16.112177  657081 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:16.112357  657081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:16.112389  657081 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:16.112411  657081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:16.112817  657081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:45:16.113446  657081 out.go:368] Setting JSON to false
	I1205 07:45:16.115020  657081 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16044,"bootTime":1764904673,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:45:16.115104  657081 start.go:143] virtualization:  
	I1205 07:45:16.118664  657081 out.go:179] * [pause-908773] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:16.122371  657081 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:16.122630  657081 notify.go:221] Checking for updates...
	I1205 07:45:16.128211  657081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:16.131023  657081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:45:16.133821  657081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:45:16.136721  657081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:16.139696  657081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:16.148692  657081 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:16.149370  657081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:16.178743  657081 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:16.178987  657081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:16.241063  657081 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:45:16.231834991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:16.241170  657081 docker.go:319] overlay module found
	I1205 07:45:16.244356  657081 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:16.247229  657081 start.go:309] selected driver: docker
	I1205 07:45:16.247266  657081 start.go:927] validating driver "docker" against &{Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:16.247380  657081 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:16.247479  657081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:16.319611  657081 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:45:16.30908936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:16.320020  657081 cni.go:84] Creating CNI manager for ""
	I1205 07:45:16.320084  657081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:45:16.320129  657081 start.go:353] cluster config:
	{Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:16.325304  657081 out.go:179] * Starting "pause-908773" primary control-plane node in "pause-908773" cluster
	I1205 07:45:16.328081  657081 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:45:16.330940  657081 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:16.333710  657081 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:45:16.333754  657081 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1205 07:45:16.333764  657081 cache.go:65] Caching tarball of preloaded images
	I1205 07:45:16.333799  657081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:16.333853  657081 preload.go:238] Found /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 07:45:16.333863  657081 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:45:16.334010  657081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/config.json ...
	I1205 07:45:16.359773  657081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:16.359792  657081 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:45:16.359815  657081 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:16.359845  657081 start.go:360] acquireMachinesLock for pause-908773: {Name:mk8eb780af7305b8a0daa8238dc7d1e4fe5cbafe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:16.359909  657081 start.go:364] duration metric: took 47.156µs to acquireMachinesLock for "pause-908773"
	I1205 07:45:16.359929  657081 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:16.359934  657081 fix.go:54] fixHost starting: 
	I1205 07:45:16.360186  657081 cli_runner.go:164] Run: docker container inspect pause-908773 --format={{.State.Status}}
	I1205 07:45:16.378200  657081 fix.go:112] recreateIfNeeded on pause-908773: state=Running err=<nil>
	W1205 07:45:16.378229  657081 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:45:16.381374  657081 out.go:252] * Updating the running docker "pause-908773" container ...
	I1205 07:45:16.381412  657081 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:16.381492  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
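
The inspect template in the Run line above is how the host-side SSH port for the container is resolved (33393 in the native-client lines that follow). A standalone equivalent, assuming a local docker CLI and the pause-908773 container:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same Go template the log shows: the host port mapped to 22/tcp.
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"pause-908773").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
    }
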
	I1205 07:45:16.401053  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:16.401380  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:16.401397  657081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:16.551265  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-908773
	
	I1205 07:45:16.551293  657081 ubuntu.go:182] provisioning hostname "pause-908773"
	I1205 07:45:16.551360  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:16.580932  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:16.581240  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:16.581256  657081 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-908773 && echo "pause-908773" | sudo tee /etc/hostname
	I1205 07:45:16.739891  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-908773
	
	I1205 07:45:16.739967  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:16.757607  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:16.757966  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:16.757987  657081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-908773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-908773/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-908773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:16.906956  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1205 07:45:16.906980  657081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 07:45:16.907004  657081 ubuntu.go:190] setting up certificates
	I1205 07:45:16.907014  657081 provision.go:84] configureAuth start
	I1205 07:45:16.907115  657081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-908773
	I1205 07:45:16.925748  657081 provision.go:143] copyHostCerts
	I1205 07:45:16.925822  657081 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 07:45:16.925837  657081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 07:45:16.925913  657081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 07:45:16.926066  657081 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 07:45:16.926073  657081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 07:45:16.926101  657081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 07:45:16.926150  657081 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 07:45:16.926155  657081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 07:45:16.926176  657081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 07:45:16.926219  657081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.pause-908773 san=[127.0.0.1 192.168.85.2 localhost minikube pause-908773]
	I1205 07:45:17.321019  657081 provision.go:177] copyRemoteCerts
	I1205 07:45:17.321118  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:17.321181  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:17.348773  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:17.454271  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:45:17.472792  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 07:45:17.489837  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:45:17.510134  657081 provision.go:87] duration metric: took 603.079484ms to configureAuth
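
configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, pause-908773). A minimal sketch of issuing such a cert with Go's stdlib; self-signed here for brevity, whereas minikube signs with its CA key (ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-908773"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:     []string{"localhost", "minikube", "pause-908773"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for the sketch; the real flow signs with the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
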
	I1205 07:45:17.510165  657081 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:17.510470  657081 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:17.510593  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:17.528626  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:17.528947  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:17.528968  657081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:45:22.904821  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:45:22.904847  657081 machine.go:97] duration metric: took 6.523426186s to provisionDockerMachine
	I1205 07:45:22.904858  657081 start.go:293] postStartSetup for "pause-908773" (driver="docker")
	I1205 07:45:22.904869  657081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:22.904929  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:22.904991  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:22.923596  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.026462  657081 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:23.030061  657081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:23.030092  657081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:23.030104  657081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 07:45:23.030203  657081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 07:45:23.030289  657081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 07:45:23.030428  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:23.038278  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:45:23.058226  657081 start.go:296] duration metric: took 153.352439ms for postStartSetup
	I1205 07:45:23.058350  657081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:23.058429  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:23.080748  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.183601  657081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:23.188773  657081 fix.go:56] duration metric: took 6.82883206s for fixHost
	I1205 07:45:23.188802  657081 start.go:83] releasing machines lock for "pause-908773", held for 6.828883573s
	I1205 07:45:23.188872  657081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-908773
	I1205 07:45:23.206595  657081 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:23.206650  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:23.206657  657081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:23.206718  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:23.232844  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.232953  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.424708  657081 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:23.431192  657081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:45:23.473711  657081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:23.478138  657081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:23.478276  657081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:23.486308  657081 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:23.486334  657081 start.go:496] detecting cgroup driver to use...
	I1205 07:45:23.486365  657081 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:23.486447  657081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:45:23.502006  657081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:45:23.515547  657081 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:23.515642  657081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:23.531113  657081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:23.544093  657081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:23.679248  657081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:23.825252  657081 docker.go:234] disabling docker service ...
	I1205 07:45:23.825378  657081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:23.840916  657081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:23.854264  657081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:23.990216  657081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:24.164406  657081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:24.178270  657081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:24.194552  657081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:45:24.194621  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.204215  657081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 07:45:24.204294  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.213101  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.222025  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.231692  657081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:24.240040  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.248869  657081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.257522  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.267732  657081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:24.275323  657081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
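
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and reset conmon_cgroup. A rough stdlib-only sketch of the same rewrites applied to a config fragment in memory:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Stand-in for the file on disk: a 02-crio.conf fragment.
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
    		"cgroup_manager = \"systemd\"\n" +
    		"conmon_cgroup = \"system.slice\""
    	// Pin the pause image minikube expects.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Force cgroupfs to match the cgroup driver detected on the host.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Reset conmon_cgroup to "pod", as the delete+append sed pair does.
    	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$`).
    		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    	fmt.Println(conf)
    }
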
	I1205 07:45:24.282777  657081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:24.419694  657081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 07:45:24.641142  657081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:45:24.641230  657081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:45:24.645450  657081 start.go:564] Will wait 60s for crictl version
	I1205 07:45:24.645571  657081 ssh_runner.go:195] Run: which crictl
	I1205 07:45:24.649286  657081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:24.673638  657081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:45:24.673721  657081 ssh_runner.go:195] Run: crio --version
	I1205 07:45:24.701821  657081 ssh_runner.go:195] Run: crio --version
	I1205 07:45:24.735394  657081 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 07:45:24.738286  657081 cli_runner.go:164] Run: docker network inspect pause-908773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:24.755052  657081 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:24.759107  657081 kubeadm.go:884] updating cluster {Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:24.759250  657081 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:45:24.759320  657081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:24.813019  657081 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:45:24.813045  657081 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:45:24.813103  657081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:24.840960  657081 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:45:24.840984  657081 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:24.840992  657081 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1205 07:45:24.841119  657081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-908773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 07:45:24.841251  657081 ssh_runner.go:195] Run: crio config
	I1205 07:45:24.908482  657081 cni.go:84] Creating CNI manager for ""
	I1205 07:45:24.908548  657081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:45:24.908586  657081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:45:24.908634  657081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-908773 NodeName:pause-908773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:24.908810  657081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-908773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
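
The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new below. A small sketch that enumerates the document kinds without pulling in a YAML parser:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Abbreviated copy of the stream above; only the kind lines survive the cut.
    const kubeadmYAML = "kind: InitConfiguration\n---\n" +
    	"kind: ClusterConfiguration\n---\n" +
    	"kind: KubeletConfiguration\n---\n" +
    	"kind: KubeProxyConfiguration"

    func main() {
    	// Multi-document YAML streams are separated by lines containing "---".
    	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if kind, ok := strings.CutPrefix(line, "kind: "); ok {
    				fmt.Printf("document %d: %s\n", i, kind)
    			}
    		}
    	}
    }
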
	
	I1205 07:45:24.908929  657081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:45:24.916930  657081 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:24.917006  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:24.925030  657081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1205 07:45:24.938059  657081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:45:24.951613  657081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1205 07:45:24.964930  657081 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:24.968782  657081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:25.107038  657081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:25.123203  657081 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773 for IP: 192.168.85.2
	I1205 07:45:25.123228  657081 certs.go:195] generating shared ca certs ...
	I1205 07:45:25.123244  657081 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:25.123414  657081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 07:45:25.123474  657081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 07:45:25.123487  657081 certs.go:257] generating profile certs ...
	I1205 07:45:25.123595  657081 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.key
	I1205 07:45:25.123674  657081 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/apiserver.key.c3fe9f90
	I1205 07:45:25.123731  657081 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/proxy-client.key
	I1205 07:45:25.123854  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 07:45:25.123892  657081 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:25.123905  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:25.123933  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:45:25.123960  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:25.123991  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:25.124043  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:45:25.124732  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:25.147411  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:25.167942  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:25.187536  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:25.206338  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:45:25.225260  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:25.248759  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:25.268702  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:45:25.286505  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:25.304903  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 07:45:25.323644  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 07:45:25.341824  657081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:25.355715  657081 ssh_runner.go:195] Run: openssl version
	I1205 07:45:25.362447  657081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.370299  657081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:25.378577  657081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.382606  657081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.382759  657081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.424190  657081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:25.431983  657081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.439524  657081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 07:45:25.449442  657081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.453410  657081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.453498  657081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.495114  657081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:45:25.503179  657081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.511459  657081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 07:45:25.519648  657081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.524015  657081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.524091  657081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.567512  657081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1205 07:45:25.575141  657081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:25.580755  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:25.623849  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:25.666077  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:25.712208  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:25.755444  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:25.796193  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
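
Each openssl invocation above is `x509 -checkend 86400`: verify the certificate remains valid for at least another 24 hours. An equivalent check in Go's stdlib, using one of the paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the PEM cert at path is still valid d from now,
    // mirroring `openssl x509 -noout -in path -checkend <seconds>`.
    func checkend(path string, d time.Duration) (bool, error) {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(b)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
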
	I1205 07:45:25.837004  657081 kubeadm.go:401] StartCluster: {Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.837121  657081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:25.837191  657081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:25.864767  657081 cri.go:89] found id: "a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1"
	I1205 07:45:25.864791  657081 cri.go:89] found id: "d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97"
	I1205 07:45:25.864796  657081 cri.go:89] found id: "ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7"
	I1205 07:45:25.864803  657081 cri.go:89] found id: "497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f"
	I1205 07:45:25.864806  657081 cri.go:89] found id: "00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896"
	I1205 07:45:25.864810  657081 cri.go:89] found id: "fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	I1205 07:45:25.864813  657081 cri.go:89] found id: "9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	I1205 07:45:25.864816  657081 cri.go:89] found id: ""
	I1205 07:45:25.864874  657081 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:45:25.879561  657081 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:45:25Z" level=error msg="open /run/runc: no such file or directory"
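
The unpause probe shells out to `runc list -f json`; on this cri-o host the default runc root /run/runc does not exist, so the probe fails and is downgraded to a warning rather than aborting the restart. A standalone sketch that treats the failure the same way:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	if err != nil {
    		// Same posture as the log: a missing /run/runc just means
    		// there is nothing paused to resume.
    		fmt.Println("runc list failed (treated as no paused containers):", err)
    		return
    	}
    	var containers []struct {
    		ID     string `json:"id"`
    		Status string `json:"status"`
    	}
    	if err := json.Unmarshal(out, &containers); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	for _, c := range containers {
    		if strings.EqualFold(c.Status, "paused") {
    			fmt.Println("paused:", c.ID)
    		}
    	}
    }
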
	I1205 07:45:25.879647  657081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:25.887702  657081 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:25.887721  657081 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:25.887805  657081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:25.895227  657081 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:25.895909  657081 kubeconfig.go:125] found "pause-908773" server: "https://192.168.85.2:8443"
	I1205 07:45:25.896752  657081 kapi.go:59] client config for pause-908773: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:45:25.897261  657081 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 07:45:25.897283  657081 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 07:45:25.897290  657081 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 07:45:25.897299  657081 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 07:45:25.897303  657081 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 07:45:25.897596  657081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:25.905335  657081 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:25.905376  657081 kubeadm.go:602] duration metric: took 17.648638ms to restartPrimaryControlPlane
	I1205 07:45:25.905386  657081 kubeadm.go:403] duration metric: took 68.392432ms to StartCluster
	I1205 07:45:25.905421  657081 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:25.905497  657081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:45:25.906317  657081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:25.906633  657081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:45:25.906978  657081 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:25.907030  657081 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:25.911186  657081 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:25.911186  657081 out.go:179] * Enabled addons: 
	I1205 07:45:25.914058  657081 addons.go:530] duration metric: took 7.023307ms for enable addons: enabled=[]
	I1205 07:45:25.914101  657081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:26.041447  657081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:26.056480  657081 node_ready.go:35] waiting up to 6m0s for node "pause-908773" to be "Ready" ...
	I1205 07:45:31.216626  657081 node_ready.go:49] node "pause-908773" is "Ready"
	I1205 07:45:31.216654  657081 node_ready.go:38] duration metric: took 5.160124588s for node "pause-908773" to be "Ready" ...
	I1205 07:45:31.216670  657081 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:31.216732  657081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:31.235916  657081 api_server.go:72] duration metric: took 5.329243197s to wait for apiserver process to appear ...
	I1205 07:45:31.235939  657081 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:45:31.235960  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:31.286548  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:45:31.286625  657081 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
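
The loop above polls the apiserver's /healthz endpoint until every poststarthook reports ok, retrying roughly every 500ms. A compressed sketch of such a poll loop; InsecureSkipVerify stands in for minikube's real client-cert configuration and is for illustration only:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		// Illustrative shortcut; the real client trusts the minikube CA
    		// and presents the profile's client cert instead.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	for {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			// Non-200 responses carry the per-hook [+]/[-] report seen above.
    			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
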
	I1205 07:45:31.736157  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:31.745167  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:45:31.745314  657081 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:45:32.236495  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:32.247993  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:45:32.248072  657081 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 07:45:32.736678  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:32.745888  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:45:32.747571  657081 api_server.go:141] control plane version: v1.34.2
	I1205 07:45:32.747641  657081 api_server.go:131] duration metric: took 1.51169458s to wait for apiserver health ...
	I1205 07:45:32.747666  657081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:45:32.752995  657081 system_pods.go:59] 7 kube-system pods found
	I1205 07:45:32.753034  657081 system_pods.go:61] "coredns-66bc5c9577-kc28g" [e5a81df8-535c-43be-9ffc-f298e707b2d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:45:32.753043  657081 system_pods.go:61] "etcd-pause-908773" [db7ad305-cb33-4abd-841a-4d6e331f2dfb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:45:32.753048  657081 system_pods.go:61] "kindnet-56nmf" [f3318b0d-6053-43e1-a02a-88240d3a4e98] Running
	I1205 07:45:32.753054  657081 system_pods.go:61] "kube-apiserver-pause-908773" [63ab9c97-d105-4f56-8d07-5ee051d1ff31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:45:32.753065  657081 system_pods.go:61] "kube-controller-manager-pause-908773" [60682c22-3323-43ad-b446-ee30dd08a77a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:45:32.753069  657081 system_pods.go:61] "kube-proxy-jszzp" [4bd9a8f2-e90f-4ba9-991e-7647e3b203bc] Running
	I1205 07:45:32.753082  657081 system_pods.go:61] "kube-scheduler-pause-908773" [14c8c36c-b499-4498-8b88-1c64d27a951b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:45:32.753091  657081 system_pods.go:74] duration metric: took 5.404416ms to wait for pod list to return data ...
	I1205 07:45:32.753099  657081 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:45:32.756403  657081 default_sa.go:45] found service account: "default"
	I1205 07:45:32.756423  657081 default_sa.go:55] duration metric: took 3.317309ms for default service account to be created ...
	I1205 07:45:32.756432  657081 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:45:32.759864  657081 system_pods.go:86] 7 kube-system pods found
	I1205 07:45:32.759938  657081 system_pods.go:89] "coredns-66bc5c9577-kc28g" [e5a81df8-535c-43be-9ffc-f298e707b2d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:45:32.759961  657081 system_pods.go:89] "etcd-pause-908773" [db7ad305-cb33-4abd-841a-4d6e331f2dfb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:45:32.759982  657081 system_pods.go:89] "kindnet-56nmf" [f3318b0d-6053-43e1-a02a-88240d3a4e98] Running
	I1205 07:45:32.760021  657081 system_pods.go:89] "kube-apiserver-pause-908773" [63ab9c97-d105-4f56-8d07-5ee051d1ff31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:45:32.760046  657081 system_pods.go:89] "kube-controller-manager-pause-908773" [60682c22-3323-43ad-b446-ee30dd08a77a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:45:32.760064  657081 system_pods.go:89] "kube-proxy-jszzp" [4bd9a8f2-e90f-4ba9-991e-7647e3b203bc] Running
	I1205 07:45:32.760097  657081 system_pods.go:89] "kube-scheduler-pause-908773" [14c8c36c-b499-4498-8b88-1c64d27a951b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:45:32.760121  657081 system_pods.go:126] duration metric: took 3.68144ms to wait for k8s-apps to be running ...
	I1205 07:45:32.760173  657081 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:45:32.760279  657081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:45:32.774267  657081 system_svc.go:56] duration metric: took 14.119093ms WaitForService to wait for kubelet
	I1205 07:45:32.774343  657081 kubeadm.go:587] duration metric: took 6.867672886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:45:32.774432  657081 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:45:32.780597  657081 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 07:45:32.780672  657081 node_conditions.go:123] node cpu capacity is 2
	I1205 07:45:32.780698  657081 node_conditions.go:105] duration metric: took 6.240594ms to run NodePressure ...
	I1205 07:45:32.780722  657081 start.go:242] waiting for startup goroutines ...
	I1205 07:45:32.780756  657081 start.go:247] waiting for cluster config update ...
	I1205 07:45:32.780781  657081 start.go:256] writing updated cluster config ...
	I1205 07:45:32.781138  657081 ssh_runner.go:195] Run: rm -f paused
	I1205 07:45:32.785205  657081 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:45:32.785951  657081 kapi.go:59] client config for pause-908773: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:45:32.791388  657081 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kc28g" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:45:34.797445  657081 pod_ready.go:104] pod "coredns-66bc5c9577-kc28g" is not "Ready", error: <nil>
	W1205 07:45:36.798495  657081 pod_ready.go:104] pod "coredns-66bc5c9577-kc28g" is not "Ready", error: <nil>
	I1205 07:45:38.797243  657081 pod_ready.go:94] pod "coredns-66bc5c9577-kc28g" is "Ready"
	I1205 07:45:38.797273  657081 pod_ready.go:86] duration metric: took 6.00581352s for pod "coredns-66bc5c9577-kc28g" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:38.799825  657081 pod_ready.go:83] waiting for pod "etcd-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:38.804483  657081 pod_ready.go:94] pod "etcd-pause-908773" is "Ready"
	I1205 07:45:38.804513  657081 pod_ready.go:86] duration metric: took 4.662942ms for pod "etcd-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:38.806934  657081 pod_ready.go:83] waiting for pod "kube-apiserver-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:45:40.811954  657081 pod_ready.go:104] pod "kube-apiserver-pause-908773" is not "Ready", error: <nil>
	W1205 07:45:42.813460  657081 pod_ready.go:104] pod "kube-apiserver-pause-908773" is not "Ready", error: <nil>
	I1205 07:45:45.312638  657081 pod_ready.go:94] pod "kube-apiserver-pause-908773" is "Ready"
	I1205 07:45:45.312668  657081 pod_ready.go:86] duration metric: took 6.505698922s for pod "kube-apiserver-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.315462  657081 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.320235  657081 pod_ready.go:94] pod "kube-controller-manager-pause-908773" is "Ready"
	I1205 07:45:45.320307  657081 pod_ready.go:86] duration metric: took 4.820999ms for pod "kube-controller-manager-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.322976  657081 pod_ready.go:83] waiting for pod "kube-proxy-jszzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.328127  657081 pod_ready.go:94] pod "kube-proxy-jszzp" is "Ready"
	I1205 07:45:45.328157  657081 pod_ready.go:86] duration metric: took 5.15309ms for pod "kube-proxy-jszzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.330766  657081 pod_ready.go:83] waiting for pod "kube-scheduler-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.510956  657081 pod_ready.go:94] pod "kube-scheduler-pause-908773" is "Ready"
	I1205 07:45:45.510989  657081 pod_ready.go:86] duration metric: took 180.196818ms for pod "kube-scheduler-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.511002  657081 pod_ready.go:40] duration metric: took 12.725722599s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:45:45.564247  657081 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1205 07:45:45.567351  657081 out.go:179] * Done! kubectl is now configured to use "pause-908773" cluster and "default" namespace by default
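
	The 500 responses above come from the apiserver's verbose healthz handler: each [+]/[-] line is one registered health check, and the probe flips to 200 once the lone failing check, poststarthook/rbac/bootstrap-roles, completes. A minimal sketch for reproducing the same view against this cluster; the context name and endpoint are taken from the log above, and direct anonymous access to /healthz assumes the default system:public-info-viewer RBAC binding is in place:

		kubectl --context pause-908773 get --raw='/healthz?verbose'
		# or hit the endpoint directly, as minikube's health wait loop does:
		curl -sk 'https://192.168.85.2:8443/healthz?verbose'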
	
	
	==> CRI-O <==
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.444575401Z" level=info msg="Started container" PID=2348 containerID=c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294 description=kube-system/coredns-66bc5c9577-kc28g/coredns id=afd3dfd8-7f53-4d47-b7f7-449740855db7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=20b2ab50e7e7bae4ddbd49ccf4ca253674ed27a728d1a63356bc9de41997b201
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.446342365Z" level=info msg="Starting container: 1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b" id=b64c6347-9fb7-47f1-bf54-e099e53e8cf3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.447430813Z" level=info msg="Created container bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3: kube-system/etcd-pause-908773/etcd" id=ef65bc48-e20f-4e83-b5a1-8e1deba0d370 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.45103816Z" level=info msg="Started container" PID=2353 containerID=d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3 description=kube-system/kindnet-56nmf/kindnet-cni id=09bcb047-60fd-44e2-ab4f-b2c12fa420fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2044a22dc9dc9db7d082952234955f745fa8e0dc53fd19c8b5c5208e8a8229
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.458600478Z" level=info msg="Starting container: bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3" id=24479ec0-ad28-4de3-a9a0-cc80f3cf3291 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.463044718Z" level=info msg="Started container" PID=2367 containerID=1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b description=kube-system/kube-proxy-jszzp/kube-proxy id=b64c6347-9fb7-47f1-bf54-e099e53e8cf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff955faf03a3c53199467294dd8d1c3104b67229f6272fdfe53f54c496937e88
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.475959184Z" level=info msg="Started container" PID=2368 containerID=bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3 description=kube-system/etcd-pause-908773/etcd id=24479ec0-ad28-4de3-a9a0-cc80f3cf3291 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a29fcda1b070339ace0bf1c4bccbd0062ebc7ee90a752d7e8ac5f53b85e5a8b
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.503868225Z" level=info msg="Created container 42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6: kube-system/kube-scheduler-pause-908773/kube-scheduler" id=db60ff9c-89ba-4797-9d91-94da7ed4f660 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.504453225Z" level=info msg="Starting container: 42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6" id=888bca54-f4d1-4ab6-917d-5d675ee7a14e name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.506903299Z" level=info msg="Started container" PID=2395 containerID=42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6 description=kube-system/kube-scheduler-pause-908773/kube-scheduler id=888bca54-f4d1-4ab6-917d-5d675ee7a14e name=/runtime.v1.RuntimeService/StartContainer sandboxID=973d4809a74e772499404dbedea7d344fb73c689cfbd1d31ce868751f3fbc238
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.824271555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.828066622Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.82810567Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.828127209Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.832190735Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.832353395Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.832425831Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.835675882Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.835820031Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.835917723Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.840067083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.84023031Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.840314011Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.843481205Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.843516504Z" level=info msg="Updated default CNI network name to kindnet"
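
	The CREATE/WRITE/RENAME events above are CRI-O's CNI watcher reacting to kindnet rewriting its conflist through a .temp file followed by a rename. A sketch for inspecting the same state on the node, assuming the profile name from this log:

		# tail the CRI-O journal inside the minikube node
		minikube ssh -p pause-908773 -- sudo journalctl -u crio --since '10 min ago'
		# show the CNI config kindnet installed
		minikube ssh -p pause-908773 -- sudo cat /etc/cni/net.d/10-kindnet.conflist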
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	42a3834bb006a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   973d4809a74e7       kube-scheduler-pause-908773            kube-system
	bd881eebfd2b3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   0a29fcda1b070       etcd-pause-908773                      kube-system
	1c4173a6bf2a1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   ff955faf03a3c       kube-proxy-jszzp                       kube-system
	c1f9ec3747671       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   20b2ab50e7e7b       coredns-66bc5c9577-kc28g               kube-system
	d094a8686e5ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   3c2044a22dc9d       kindnet-56nmf                          kube-system
	85fa8e67c2256       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   5b107176fe0a1       kube-controller-manager-pause-908773   kube-system
	b0092a97b6fe6       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   2e6fb8838b99d       kube-apiserver-pause-908773            kube-system
	a2d60339bcc98       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   20b2ab50e7e7b       coredns-66bc5c9577-kc28g               kube-system
	d68733d7f972f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   3c2044a22dc9d       kindnet-56nmf                          kube-system
	ac3860d5f8107       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   ff955faf03a3c       kube-proxy-jszzp                       kube-system
	497dce47fc696       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   2e6fb8838b99d       kube-apiserver-pause-908773            kube-system
	00c4de126b652       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   5b107176fe0a1       kube-controller-manager-pause-908773   kube-system
	fd1160cc4bfca       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   0a29fcda1b070       etcd-pause-908773                      kube-system
	9a3d6d291d5b0       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   973d4809a74e7       kube-scheduler-pause-908773            kube-system
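
	The table mirrors the CRI view of the node: the ATTEMPT 1 rows are restarted replacements of the Exited ATTEMPT 0 containers, reusing the same pod sandboxes (identical POD IDs). A sketch for pulling the same listing and one container's log directly; container IDs are abbreviated as in the table, and crictl accepts ID prefixes:

		minikube ssh -p pause-908773 -- sudo crictl ps -a
		# e.g. the restarted coredns container from the first row of attempt 1
		minikube ssh -p pause-908773 -- sudo crictl logs c1f9ec3747671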
	
	
	==> coredns [a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35040 - 55957 "HINFO IN 8925664347791367152.3814835325500385008. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021717407s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57994 - 49934 "HINFO IN 8513627343155943591.3042283095642630719. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02604761s
	
	
	==> describe nodes <==
	Name:               pause-908773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-908773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=pause-908773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_44_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-908773
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:45:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:44:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:44:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:44:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-908773
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                d64ce5e1-7c0e-4070-8226-19bec79dffeb
	  Boot ID:                    6438d548-ea0a-487b-93bc-8af12c014d83
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-kc28g                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-908773                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-56nmf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-908773             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-908773    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-jszzp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-908773             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 75s   kube-proxy       
	  Normal   Starting                 16s   kube-proxy       
	  Normal   Starting                 82s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s   kubelet          Node pause-908773 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s   kubelet          Node pause-908773 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s   kubelet          Node pause-908773 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s   node-controller  Node pause-908773 event: Registered Node pause-908773 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-908773 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-908773 event: Registered Node pause-908773 in Controller
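
	The doubled Starting and RegisteredNode events reflect the restart this test exercises: one pair from the original boot (~75-82s ago) and one from the post-restart control plane (~14-16s ago). The section is standard kubectl output and can be regenerated with (context name assumed from the log above):

		kubectl --context pause-908773 describe node pause-908773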
	
	
	==> dmesg <==
	[ +33.737398] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:10] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:11] overlayfs: idmapped layers are currently not supported
	[  +3.073089] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:12] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:13] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:14] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:19] overlayfs: idmapped layers are currently not supported
	[ +33.161652] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:21] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:22] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:23] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:24] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:25] overlayfs: idmapped layers are currently not supported
	[ +19.047599] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:26] overlayfs: idmapped layers are currently not supported
	[ +16.337115] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:27] overlayfs: idmapped layers are currently not supported
	[ +25.534355] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:28] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:30] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:32] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:33] overlayfs: idmapped layers are currently not supported
	[ +28.256020] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3] <==
	{"level":"warn","ts":"2025-12-05T07:45:28.765734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:28.814759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:28.886763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:28.952337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.000779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.039846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.060048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.076442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.115260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.174597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.216727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.227389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.276564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.284790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.295112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.329401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.378701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.395050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.423217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.440539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.474686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.522238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.542906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.556957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.695294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	
	
	==> etcd [fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2] <==
	{"level":"warn","ts":"2025-12-05T07:44:23.436405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.453721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.471703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.496002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.519138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.539000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.635989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T07:45:17.699806Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-05T07:45:17.699847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-908773","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-05T07:45:17.699932Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T07:45:17.845807Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T07:45:17.847336Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T07:45:17.847389Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847411Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-05T07:45:17.847451Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847459Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-05T07:45:17.847466Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-12-05T07:45:17.847469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847510Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847523Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T07:45:17.847529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T07:45:17.851004Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-05T07:45:17.851088Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T07:45:17.851121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-05T07:45:17.851137Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-908773","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 07:45:48 up  4:27,  0 user,  load average: 2.70, 1.77, 1.85
	Linux pause-908773 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3] <==
	I1205 07:45:26.619303       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:45:26.619705       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:45:26.619886       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:45:26.619932       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:45:26.619966       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:45:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:45:26.823370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:45:26.830418       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:45:26.830485       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:45:26.832770       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:45:31.434451       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:45:31.434497       1 metrics.go:72] Registering metrics
	I1205 07:45:31.434577       1 controller.go:711] "Syncing nftables rules"
	I1205 07:45:36.823892       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:45:36.823941       1 main.go:301] handling current node
	I1205 07:45:46.823743       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:45:46.823773       1 main.go:301] handling current node
	
	
	==> kindnet [d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97] <==
	I1205 07:44:33.022172       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:44:33.022570       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:44:33.022690       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:44:33.022702       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:44:33.022716       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:44:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:44:33.227484       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:44:33.227503       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:44:33.227512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:44:33.227885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1205 07:45:03.227154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1205 07:45:03.227167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1205 07:45:03.227274       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1205 07:45:03.228638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1205 07:45:04.628374       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:45:04.628407       1 metrics.go:72] Registering metrics
	I1205 07:45:04.628470       1 controller.go:711] "Syncing nftables rules"
	I1205 07:45:13.232379       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:45:13.232474       1 main.go:301] handling current node
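
	The "Failed to watch ... i/o timeout" errors at 07:45:03 appear to bracket the window in which the original control plane was being torn down; the informer caches resync about a second later. To compare both kindnet incarnations after the restart, the prior container's log can be fetched with the command below (pod name taken from the log; the pod runs a single container, so no -c flag is needed):

		kubectl --context pause-908773 -n kube-system logs kindnet-56nmf --previous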
	
	
	==> kube-apiserver [497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f] <==
	W1205 07:45:17.715551       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.715947       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.715977       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716003       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716027       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716052       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716078       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716107       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716131       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716155       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716179       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716204       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716229       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716256       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717748       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717775       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717797       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717818       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718030       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718070       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718560       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718586       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718612       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718926       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b0092a97b6fe65c8ffaef671961badb67a3e4d6d62d80c42d2051994069dcaae] <==
	I1205 07:45:31.203821       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:45:31.203840       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:45:31.237281       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 07:45:31.270212       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1205 07:45:31.270269       1 policy_source.go:240] refreshing policies
	I1205 07:45:31.271000       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:45:31.272868       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:45:31.279606       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1205 07:45:31.280038       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 07:45:31.280072       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 07:45:31.281690       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1205 07:45:31.281728       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 07:45:31.296239       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:45:31.296305       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 07:45:31.302561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 07:45:31.304143       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:45:31.383390       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:45:31.396528       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 07:45:31.403131       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:45:31.720924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:45:32.833930       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:45:34.284639       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:45:34.384241       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:45:34.438533       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:45:34.538233       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896] <==
	I1205 07:44:31.446882       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1205 07:44:31.451529       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:44:31.458928       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 07:44:31.478402       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:44:31.478509       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 07:44:31.478629       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 07:44:31.478677       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:44:31.478522       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 07:44:31.478645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:44:31.478736       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:44:31.481810       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 07:44:31.481868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:44:31.490080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1205 07:44:31.492318       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:44:31.492432       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:44:31.492513       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-908773"
	I1205 07:44:31.492575       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1205 07:44:31.498852       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 07:44:31.517281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 07:44:31.526045       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:44:31.527192       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:44:31.527306       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:44:31.527342       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 07:44:31.527356       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 07:45:16.499115       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [85fa8e67c22567024fa7b705e2ed7964a9445424ae54ee1525dc8ba5fb4a3a6a] <==
	I1205 07:45:34.144977       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1205 07:45:34.151342       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1205 07:45:34.153608       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:45:34.156853       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 07:45:34.159098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 07:45:34.162403       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 07:45:34.170607       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:45:34.171779       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:45:34.175015       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 07:45:34.176232       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:45:34.176327       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 07:45:34.177464       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 07:45:34.177500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 07:45:34.177576       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:45:34.177892       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:45:34.178007       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:45:34.178096       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 07:45:34.178152       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 07:45:34.178145       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-908773"
	I1205 07:45:34.178334       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 07:45:34.178438       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1205 07:45:34.185006       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:45:34.185027       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 07:45:34.185034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 07:45:34.187874       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b] <==
	I1205 07:45:26.591013       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:45:27.579490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:45:31.394446       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:45:31.410459       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:45:31.449129       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:45:32.055528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:45:32.055646       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:45:32.125591       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:45:32.125920       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:45:32.134500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:45:32.144122       1 config.go:200] "Starting service config controller"
	I1205 07:45:32.144210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:45:32.144251       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:45:32.144310       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:45:32.144346       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:45:32.144372       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:45:32.146159       1 config.go:309] "Starting node config controller"
	I1205 07:45:32.173026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:45:32.173127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:45:32.249541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:45:32.257360       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:45:32.257718       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7] <==
	I1205 07:44:33.041022       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:44:33.200314       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:44:33.301158       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:44:33.301200       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:44:33.301298       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:44:33.365055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:44:33.365200       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:44:33.370993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:44:33.371404       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:44:33.371600       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:44:33.372958       1 config.go:200] "Starting service config controller"
	I1205 07:44:33.373017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:44:33.373056       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:44:33.373082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:44:33.373132       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:44:33.373173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:44:33.373833       1 config.go:309] "Starting node config controller"
	I1205 07:44:33.373888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:44:33.373927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:44:33.474094       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:44:33.474125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:44:33.474149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6] <==
	I1205 07:45:28.877645       1 serving.go:386] Generated self-signed cert in-memory
	I1205 07:45:31.980367       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 07:45:31.991008       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:45:32.017861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:45:32.018958       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1205 07:45:32.019050       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1205 07:45:32.019137       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:45:32.021330       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:32.035366       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:32.026462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 07:45:32.035843       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 07:45:32.123448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1205 07:45:32.136646       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:32.136596       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623] <==
	E1205 07:44:25.175499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 07:44:25.175586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 07:44:25.175638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 07:44:25.175702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 07:44:25.175951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 07:44:25.176117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:44:25.176259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 07:44:25.176458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 07:44:25.178561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 07:44:25.178767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:44:25.178881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 07:44:25.179134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:44:25.179200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 07:44:25.179260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 07:44:25.179328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 07:44:25.179409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 07:44:25.179471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:44:25.179540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1205 07:44:26.561078       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:17.696604       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1205 07:45:17.696631       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1205 07:45:17.696655       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 07:45:17.696680       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:17.696805       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1205 07:45:17.696818       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.284403    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="04c7812fd289db76f39fbc2b2cae5a9f" pod="kube-system/kube-controller-manager-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.284533    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0e094c2614ecc034efbe45f620b6d31a" pod="kube-system/kube-scheduler-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: I1205 07:45:26.291035    1302 scope.go:117] "RemoveContainer" containerID="fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.291667    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb30420b19ae6f4e6ba10fe4524a0bed" pod="kube-system/kube-apiserver-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.292055    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f8027ba6b18dd5d42b44eb6d3c739a7" pod="kube-system/etcd-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.292408    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="04c7812fd289db76f39fbc2b2cae5a9f" pod="kube-system/kube-controller-manager-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.300193    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0e094c2614ecc034efbe45f620b6d31a" pod="kube-system/kube-scheduler-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.304597    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-56nmf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f3318b0d-6053-43e1-a02a-88240d3a4e98" pod="kube-system/kindnet-56nmf"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.305020    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jszzp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4bd9a8f2-e90f-4ba9-991e-7647e3b203bc" pod="kube-system/kube-proxy-jszzp"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.305368    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-kc28g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5a81df8-535c-43be-9ffc-f298e707b2d5" pod="kube-system/coredns-66bc5c9577-kc28g"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: I1205 07:45:26.310589    1302 scope.go:117] "RemoveContainer" containerID="9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.311184    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f8027ba6b18dd5d42b44eb6d3c739a7" pod="kube-system/etcd-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.311702    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="04c7812fd289db76f39fbc2b2cae5a9f" pod="kube-system/kube-controller-manager-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.312832    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0e094c2614ecc034efbe45f620b6d31a" pod="kube-system/kube-scheduler-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.317913    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-56nmf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f3318b0d-6053-43e1-a02a-88240d3a4e98" pod="kube-system/kindnet-56nmf"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.322132    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jszzp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4bd9a8f2-e90f-4ba9-991e-7647e3b203bc" pod="kube-system/kube-proxy-jszzp"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.322364    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-kc28g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5a81df8-535c-43be-9ffc-f298e707b2d5" pod="kube-system/coredns-66bc5c9577-kc28g"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.322604    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb30420b19ae6f4e6ba10fe4524a0bed" pod="kube-system/kube-apiserver-pause-908773"
	Dec 05 07:45:30 pause-908773 kubelet[1302]: E1205 07:45:30.899779    1302 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-908773\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-908773' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 05 07:45:30 pause-908773 kubelet[1302]: E1205 07:45:30.900443    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-56nmf\" is forbidden: User \"system:node:pause-908773\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-908773' and this object" podUID="f3318b0d-6053-43e1-a02a-88240d3a4e98" pod="kube-system/kindnet-56nmf"
	Dec 05 07:45:31 pause-908773 kubelet[1302]: E1205 07:45:31.140213    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-jszzp\" is forbidden: User \"system:node:pause-908773\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-908773' and this object" podUID="4bd9a8f2-e90f-4ba9-991e-7647e3b203bc" pod="kube-system/kube-proxy-jszzp"
	Dec 05 07:45:37 pause-908773 kubelet[1302]: W1205 07:45:37.240262    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 05 07:45:46 pause-908773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:45:46 pause-908773 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:45:46 pause-908773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-908773 -n pause-908773
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-908773 -n pause-908773: exit status 2 (360.783533ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
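Note: each `--format={{.APIServer}}`-style probe above extracts one field of minikube's status struct; when triaging locally, the same fields can be dumped in a single templated call. A hedged reproduction sketch using the same binary and profile as this run (not part of the harness):

	# Sketch: print the status fields the harness samples one by one.
	# Assumes the pause-908773 profile from this run still exists.
	out/minikube-linux-arm64 status -p pause-908773 \
	  --format='host:{{.Host}} apiserver:{{.APIServer}} kubelet:{{.Kubelet}} kubeconfig:{{.Kubeconfig}}'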
helpers_test.go:269: (dbg) Run:  kubectl --context pause-908773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-908773
helpers_test.go:243: (dbg) docker inspect pause-908773:

-- stdout --
	[
	    {
	        "Id": "edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de",
	        "Created": "2025-12-05T07:44:05.178977974Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 654556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-05T07:44:05.245204233Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/hostname",
	        "HostsPath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/hosts",
	        "LogPath": "/var/lib/docker/containers/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de/edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de-json.log",
	        "Name": "/pause-908773",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-908773:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-908773",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edf442d4a4e362143d1d1e34fed4e86176dda33f6c6ff4c8c2014f8676b6e3de",
	                "LowerDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def-init/diff:/var/lib/docker/overlay2/a3f3952b992fe590f5cdfb74e36830e84a240b65b06dee5e7122e6ff293d0cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b41a48527a5f37150098ae5fb7ba6ea6df09c7f9ca81df32d0bd1a00085c2def/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-908773",
	                "Source": "/var/lib/docker/volumes/pause-908773/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-908773",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-908773",
	                "name.minikube.sigs.k8s.io": "pause-908773",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f07ad93579a48743e1636244c3580a6795093af817ed5b3130acfe5bba204569",
	            "SandboxKey": "/var/run/docker/netns/f07ad93579a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-908773": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:7c:29:1f:b2:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67fcaa158c8d7e5509bd17a7693b9a7f5caaad1d628c6c36388cca528c02cf2f",
	                    "EndpointID": "3281418e267bf62db907be1047490b2734749fa8ff65e7218c47892b2579700f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-908773",
	                        "edf442d4a4e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
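The port mappings in the inspect output above are how clients on the host reach the node; the API server mapping (container 8443 -> host 33396 here) can be pulled directly with a Go template. A hedged sketch, assuming the container from this run is still present:

	# Sketch: extract the host port mapped to the API server port 8443.
	docker inspect pause-908773 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'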
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-908773 -n pause-908773
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-908773 -n pause-908773: exit status 2 (343.341732ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
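For reference, the Audit table in the logs below records the exact command sequence run against this profile; a hedged local reproduction of the failing step would look like the following, with flags copied from those audit entries:

	# Sketch: replay the pause-908773 sequence from the Audit log.
	out/minikube-linux-arm64 start -p pause-908773 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p pause-908773 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 pause -p pause-908773 --alsologtostderr -v=5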
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-908773 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-908773 logs -n 25: (1.411231909s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-587853 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p missing-upgrade-168812 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-168812    │ jenkins │ v1.35.0 │ 05 Dec 25 07:31 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ delete  │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ ssh     │ -p NoKubernetes-587853 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │                     │
	│ start   │ -p missing-upgrade-168812 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-168812    │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:32 UTC │
	│ start   │ -p NoKubernetes-587853 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:32 UTC │ 05 Dec 25 07:33 UTC │
	│ ssh     │ -p NoKubernetes-587853 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │                     │
	│ delete  │ -p NoKubernetes-587853                                                                                                                          │ NoKubernetes-587853       │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ delete  │ -p missing-upgrade-168812                                                                                                                       │ missing-upgrade-168812    │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ stop    │ -p kubernetes-upgrade-421996                                                                                                                    │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:33 UTC │
	│ start   │ -p kubernetes-upgrade-421996 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-421996 │ jenkins │ v1.37.0 │ 05 Dec 25 07:33 UTC │                     │
	│ start   │ -p stopped-upgrade-837565 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-837565    │ jenkins │ v1.35.0 │ 05 Dec 25 07:33 UTC │ 05 Dec 25 07:34 UTC │
	│ stop    │ stopped-upgrade-837565 stop                                                                                                                     │ stopped-upgrade-837565    │ jenkins │ v1.35.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:34 UTC │
	│ start   │ -p stopped-upgrade-837565 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-837565    │ jenkins │ v1.37.0 │ 05 Dec 25 07:34 UTC │ 05 Dec 25 07:38 UTC │
	│ delete  │ -p stopped-upgrade-837565                                                                                                                       │ stopped-upgrade-837565    │ jenkins │ v1.37.0 │ 05 Dec 25 07:38 UTC │ 05 Dec 25 07:38 UTC │
	│ start   │ -p running-upgrade-685187 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-685187    │ jenkins │ v1.35.0 │ 05 Dec 25 07:38 UTC │ 05 Dec 25 07:39 UTC │
	│ start   │ -p running-upgrade-685187 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-685187    │ jenkins │ v1.37.0 │ 05 Dec 25 07:39 UTC │ 05 Dec 25 07:43 UTC │
	│ delete  │ -p running-upgrade-685187                                                                                                                       │ running-upgrade-685187    │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │ 05 Dec 25 07:43 UTC │
	│ start   │ -p pause-908773 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:43 UTC │ 05 Dec 25 07:45 UTC │
	│ start   │ -p pause-908773 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │ 05 Dec 25 07:45 UTC │
	│ pause   │ -p pause-908773 --alsologtostderr -v=5                                                                                                          │ pause-908773              │ jenkins │ v1.37.0 │ 05 Dec 25 07:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 07:45:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 07:45:16.112177  657081 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:45:16.112357  657081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:16.112389  657081 out.go:374] Setting ErrFile to fd 2...
	I1205 07:45:16.112411  657081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:45:16.112817  657081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:45:16.113446  657081 out.go:368] Setting JSON to false
	I1205 07:45:16.115020  657081 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16044,"bootTime":1764904673,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:45:16.115104  657081 start.go:143] virtualization:  
	I1205 07:45:16.118664  657081 out.go:179] * [pause-908773] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:45:16.122371  657081 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:45:16.122630  657081 notify.go:221] Checking for updates...
	I1205 07:45:16.128211  657081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:45:16.131023  657081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:45:16.133821  657081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:45:16.136721  657081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:45:16.139696  657081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:45:16.148692  657081 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:16.149370  657081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:45:16.178743  657081 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:45:16.178987  657081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:16.241063  657081 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:45:16.231834991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:16.241170  657081 docker.go:319] overlay module found
	I1205 07:45:16.244356  657081 out.go:179] * Using the docker driver based on existing profile
	I1205 07:45:16.247229  657081 start.go:309] selected driver: docker
	I1205 07:45:16.247266  657081 start.go:927] validating driver "docker" against &{Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:16.247380  657081 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:45:16.247479  657081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:45:16.319611  657081 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:45:16.30908936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:45:16.320020  657081 cni.go:84] Creating CNI manager for ""
	I1205 07:45:16.320084  657081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:45:16.320129  657081 start.go:353] cluster config:
	{Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:16.325304  657081 out.go:179] * Starting "pause-908773" primary control-plane node in "pause-908773" cluster
	I1205 07:45:16.328081  657081 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 07:45:16.330940  657081 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1205 07:45:16.333710  657081 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:45:16.333754  657081 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1205 07:45:16.333764  657081 cache.go:65] Caching tarball of preloaded images
	I1205 07:45:16.333799  657081 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 07:45:16.333853  657081 preload.go:238] Found /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 07:45:16.333863  657081 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1205 07:45:16.334010  657081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/config.json ...
	I1205 07:45:16.359773  657081 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 07:45:16.359792  657081 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1205 07:45:16.359815  657081 cache.go:243] Successfully downloaded all kic artifacts
	I1205 07:45:16.359845  657081 start.go:360] acquireMachinesLock for pause-908773: {Name:mk8eb780af7305b8a0daa8238dc7d1e4fe5cbafe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 07:45:16.359909  657081 start.go:364] duration metric: took 47.156µs to acquireMachinesLock for "pause-908773"
	I1205 07:45:16.359929  657081 start.go:96] Skipping create...Using existing machine configuration
	I1205 07:45:16.359934  657081 fix.go:54] fixHost starting: 
	I1205 07:45:16.360186  657081 cli_runner.go:164] Run: docker container inspect pause-908773 --format={{.State.Status}}
	I1205 07:45:16.378200  657081 fix.go:112] recreateIfNeeded on pause-908773: state=Running err=<nil>
	W1205 07:45:16.378229  657081 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 07:45:16.381374  657081 out.go:252] * Updating the running docker "pause-908773" container ...
	I1205 07:45:16.381412  657081 machine.go:94] provisionDockerMachine start ...
	I1205 07:45:16.381492  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
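	The inspect template above resolves the host port Docker published for the container's 22/tcp, which is why the SSH client below dials 127.0.0.1:33393. An equivalent standalone lookup, using the generic docker CLI rather than anything captured in this log, would be:
	
	    docker port pause-908773 22/tcp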
	I1205 07:45:16.401053  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:16.401380  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:16.401397  657081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1205 07:45:16.551265  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-908773
	
	I1205 07:45:16.551293  657081 ubuntu.go:182] provisioning hostname "pause-908773"
	I1205 07:45:16.551360  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:16.580932  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:16.581240  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:16.581256  657081 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-908773 && echo "pause-908773" | sudo tee /etc/hostname
	I1205 07:45:16.739891  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-908773
	
	I1205 07:45:16.739967  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:16.757607  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:16.757966  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:16.757987  657081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-908773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-908773/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-908773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 07:45:16.906956  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
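	The heredoc above only rewrites the 127.0.1.1 entry when the hostname is not already present in /etc/hosts. Assuming a shell inside the same container, the resulting mapping could be checked with:
	
	    getent hosts pause-908773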
	I1205 07:45:16.906980  657081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-441321/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-441321/.minikube}
	I1205 07:45:16.907004  657081 ubuntu.go:190] setting up certificates
	I1205 07:45:16.907014  657081 provision.go:84] configureAuth start
	I1205 07:45:16.907115  657081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-908773
	I1205 07:45:16.925748  657081 provision.go:143] copyHostCerts
	I1205 07:45:16.925822  657081 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem, removing ...
	I1205 07:45:16.925837  657081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem
	I1205 07:45:16.925913  657081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/key.pem (1675 bytes)
	I1205 07:45:16.926066  657081 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem, removing ...
	I1205 07:45:16.926073  657081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem
	I1205 07:45:16.926101  657081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/ca.pem (1082 bytes)
	I1205 07:45:16.926150  657081 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem, removing ...
	I1205 07:45:16.926155  657081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem
	I1205 07:45:16.926176  657081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-441321/.minikube/cert.pem (1123 bytes)
	I1205 07:45:16.926219  657081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem org=jenkins.pause-908773 san=[127.0.0.1 192.168.85.2 localhost minikube pause-908773]
	I1205 07:45:17.321019  657081 provision.go:177] copyRemoteCerts
	I1205 07:45:17.321118  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 07:45:17.321181  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:17.348773  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:17.454271  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 07:45:17.472792  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 07:45:17.489837  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 07:45:17.510134  657081 provision.go:87] duration metric: took 603.079484ms to configureAuth
	I1205 07:45:17.510165  657081 ubuntu.go:206] setting minikube options for container-runtime
	I1205 07:45:17.510470  657081 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:17.510593  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:17.528626  657081 main.go:143] libmachine: Using SSH client type: native
	I1205 07:45:17.528947  657081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1205 07:45:17.528968  657081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 07:45:22.904821  657081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 07:45:22.904847  657081 machine.go:97] duration metric: took 6.523426186s to provisionDockerMachine
	I1205 07:45:22.904858  657081 start.go:293] postStartSetup for "pause-908773" (driver="docker")
	I1205 07:45:22.904869  657081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 07:45:22.904929  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 07:45:22.904991  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:22.923596  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.026462  657081 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 07:45:23.030061  657081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 07:45:23.030092  657081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1205 07:45:23.030104  657081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/addons for local assets ...
	I1205 07:45:23.030203  657081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-441321/.minikube/files for local assets ...
	I1205 07:45:23.030289  657081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem -> 4441472.pem in /etc/ssl/certs
	I1205 07:45:23.030428  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 07:45:23.038278  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:45:23.058226  657081 start.go:296] duration metric: took 153.352439ms for postStartSetup
	I1205 07:45:23.058350  657081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:45:23.058429  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:23.080748  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.183601  657081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 07:45:23.188773  657081 fix.go:56] duration metric: took 6.82883206s for fixHost
	I1205 07:45:23.188802  657081 start.go:83] releasing machines lock for "pause-908773", held for 6.828883573s
	I1205 07:45:23.188872  657081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-908773
	I1205 07:45:23.206595  657081 ssh_runner.go:195] Run: cat /version.json
	I1205 07:45:23.206650  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:23.206657  657081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 07:45:23.206718  657081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-908773
	I1205 07:45:23.232844  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.232953  657081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/pause-908773/id_rsa Username:docker}
	I1205 07:45:23.424708  657081 ssh_runner.go:195] Run: systemctl --version
	I1205 07:45:23.431192  657081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 07:45:23.473711  657081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 07:45:23.478138  657081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 07:45:23.478276  657081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 07:45:23.486308  657081 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 07:45:23.486334  657081 start.go:496] detecting cgroup driver to use...
	I1205 07:45:23.486365  657081 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 07:45:23.486447  657081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 07:45:23.502006  657081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 07:45:23.515547  657081 docker.go:218] disabling cri-docker service (if available) ...
	I1205 07:45:23.515642  657081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 07:45:23.531113  657081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 07:45:23.544093  657081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 07:45:23.679248  657081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 07:45:23.825252  657081 docker.go:234] disabling docker service ...
	I1205 07:45:23.825378  657081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 07:45:23.840916  657081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 07:45:23.854264  657081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 07:45:23.990216  657081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 07:45:24.164406  657081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 07:45:24.178270  657081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 07:45:24.194552  657081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1205 07:45:24.194621  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.204215  657081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 07:45:24.204294  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.213101  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.222025  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.231692  657081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 07:45:24.240040  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.248869  657081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.257522  657081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 07:45:24.267732  657081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 07:45:24.275323  657081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 07:45:24.282777  657081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:24.419694  657081 ssh_runner.go:195] Run: sudo systemctl restart crio
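	Taken together, the sed edits above pin the pause image, switch the cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports before CRI-O is restarted. A sketch of verifying the result; the expected lines are reconstructed from the edit commands, not captured output:
	
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",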
	I1205 07:45:24.641142  657081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 07:45:24.641230  657081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 07:45:24.645450  657081 start.go:564] Will wait 60s for crictl version
	I1205 07:45:24.645571  657081 ssh_runner.go:195] Run: which crictl
	I1205 07:45:24.649286  657081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1205 07:45:24.673638  657081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1205 07:45:24.673721  657081 ssh_runner.go:195] Run: crio --version
	I1205 07:45:24.701821  657081 ssh_runner.go:195] Run: crio --version
	I1205 07:45:24.735394  657081 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1205 07:45:24.738286  657081 cli_runner.go:164] Run: docker network inspect pause-908773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 07:45:24.755052  657081 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 07:45:24.759107  657081 kubeadm.go:884] updating cluster {Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 07:45:24.759250  657081 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1205 07:45:24.759320  657081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:24.813019  657081 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:45:24.813045  657081 crio.go:433] Images already preloaded, skipping extraction
	I1205 07:45:24.813103  657081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 07:45:24.840960  657081 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 07:45:24.840984  657081 cache_images.go:86] Images are preloaded, skipping loading
	I1205 07:45:24.840992  657081 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1205 07:45:24.841119  657081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-908773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
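	The bare ExecStart= line in the drop-in above is deliberate: systemd appends rather than replaces ExecStart assignments, so an override must first clear the inherited value with an empty ExecStart= before supplying its own command line. After the daemon-reload further down, the merged unit can be inspected with the generic:
	
	    systemctl cat kubelet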
	I1205 07:45:24.841251  657081 ssh_runner.go:195] Run: crio config
	I1205 07:45:24.908482  657081 cni.go:84] Creating CNI manager for ""
	I1205 07:45:24.908548  657081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 07:45:24.908586  657081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1205 07:45:24.908634  657081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-908773 NodeName:pause-908773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 07:45:24.908810  657081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-908773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
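	The rendered config above is what later lands on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines down). As a generic kubeadm usage example, not a command from this log, such a file can be sanity-checked without mutating the cluster via:
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run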
	
	I1205 07:45:24.908929  657081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1205 07:45:24.916930  657081 binaries.go:51] Found k8s binaries, skipping transfer
	I1205 07:45:24.917006  657081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 07:45:24.925030  657081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1205 07:45:24.938059  657081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 07:45:24.951613  657081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1205 07:45:24.964930  657081 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 07:45:24.968782  657081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:25.107038  657081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:25.123203  657081 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773 for IP: 192.168.85.2
	I1205 07:45:25.123228  657081 certs.go:195] generating shared ca certs ...
	I1205 07:45:25.123244  657081 certs.go:227] acquiring lock for ca certs: {Name:mk2b2b044267ad2ba0bf7f07ba3063fb33694d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:25.123414  657081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key
	I1205 07:45:25.123474  657081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key
	I1205 07:45:25.123487  657081 certs.go:257] generating profile certs ...
	I1205 07:45:25.123595  657081 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.key
	I1205 07:45:25.123674  657081 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/apiserver.key.c3fe9f90
	I1205 07:45:25.123731  657081 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/proxy-client.key
	I1205 07:45:25.123854  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem (1338 bytes)
	W1205 07:45:25.123892  657081 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147_empty.pem, impossibly tiny 0 bytes
	I1205 07:45:25.123905  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 07:45:25.123933  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/ca.pem (1082 bytes)
	I1205 07:45:25.123960  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/cert.pem (1123 bytes)
	I1205 07:45:25.123991  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/certs/key.pem (1675 bytes)
	I1205 07:45:25.124043  657081 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem (1708 bytes)
	I1205 07:45:25.124732  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 07:45:25.147411  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 07:45:25.167942  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 07:45:25.187536  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 07:45:25.206338  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 07:45:25.225260  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 07:45:25.248759  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 07:45:25.268702  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 07:45:25.286505  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 07:45:25.304903  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/certs/444147.pem --> /usr/share/ca-certificates/444147.pem (1338 bytes)
	I1205 07:45:25.323644  657081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/ssl/certs/4441472.pem --> /usr/share/ca-certificates/4441472.pem (1708 bytes)
	I1205 07:45:25.341824  657081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 07:45:25.355715  657081 ssh_runner.go:195] Run: openssl version
	I1205 07:45:25.362447  657081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.370299  657081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1205 07:45:25.378577  657081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.382606  657081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 06:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.382759  657081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 07:45:25.424190  657081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1205 07:45:25.431983  657081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.439524  657081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/444147.pem /etc/ssl/certs/444147.pem
	I1205 07:45:25.449442  657081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.453410  657081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 06:31 /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.453498  657081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/444147.pem
	I1205 07:45:25.495114  657081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1205 07:45:25.503179  657081 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.511459  657081 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4441472.pem /etc/ssl/certs/4441472.pem
	I1205 07:45:25.519648  657081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.524015  657081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 06:31 /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.524091  657081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4441472.pem
	I1205 07:45:25.567512  657081 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
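	The three test -L probes above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: a CA certificate is activated by symlinking <subject-hash>.0 in /etc/ssl/certs to the PEM file, where the hash is exactly what the preceding openssl x509 -hash calls compute. The same pattern, condensed:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"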
	I1205 07:45:25.575141  657081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 07:45:25.580755  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 07:45:25.623849  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 07:45:25.666077  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 07:45:25.712208  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 07:45:25.755444  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 07:45:25.796193  657081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
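	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours (86,400 seconds), presumably so the caller can trigger regeneration. As a standalone illustration of the idiom:
	
	    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "apiserver.crt expires within 24h"
	    fi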
	I1205 07:45:25.837004  657081 kubeadm.go:401] StartCluster: {Name:pause-908773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-908773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:45:25.837121  657081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 07:45:25.837191  657081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 07:45:25.864767  657081 cri.go:89] found id: "a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1"
	I1205 07:45:25.864791  657081 cri.go:89] found id: "d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97"
	I1205 07:45:25.864796  657081 cri.go:89] found id: "ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7"
	I1205 07:45:25.864803  657081 cri.go:89] found id: "497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f"
	I1205 07:45:25.864806  657081 cri.go:89] found id: "00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896"
	I1205 07:45:25.864810  657081 cri.go:89] found id: "fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	I1205 07:45:25.864813  657081 cri.go:89] found id: "9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	I1205 07:45:25.864816  657081 cri.go:89] found id: ""
	I1205 07:45:25.864874  657081 ssh_runner.go:195] Run: sudo runc list -f json
	W1205 07:45:25.879561  657081 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T07:45:25Z" level=error msg="open /run/runc: no such file or directory"
	I1205 07:45:25.879647  657081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 07:45:25.887702  657081 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1205 07:45:25.887721  657081 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1205 07:45:25.887805  657081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 07:45:25.895227  657081 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 07:45:25.895909  657081 kubeconfig.go:125] found "pause-908773" server: "https://192.168.85.2:8443"
	I1205 07:45:25.896752  657081 kapi.go:59] client config for pause-908773: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:45:25.897261  657081 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1205 07:45:25.897283  657081 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 07:45:25.897290  657081 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1205 07:45:25.897299  657081 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 07:45:25.897303  657081 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1205 07:45:25.897596  657081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 07:45:25.905335  657081 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 07:45:25.905376  657081 kubeadm.go:602] duration metric: took 17.648638ms to restartPrimaryControlPlane
	I1205 07:45:25.905386  657081 kubeadm.go:403] duration metric: took 68.392432ms to StartCluster
	I1205 07:45:25.905421  657081 settings.go:142] acquiring lock: {Name:mkda623ae19e2da5d8a248b9335f2c17977f458f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:25.905497  657081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:45:25.906317  657081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/kubeconfig: {Name:mk858e93f2db72aff3248723772b84583917c586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 07:45:25.906633  657081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 07:45:25.906978  657081 config.go:182] Loaded profile config "pause-908773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:45:25.907030  657081 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 07:45:25.911186  657081 out.go:179] * Verifying Kubernetes components...
	I1205 07:45:25.911186  657081 out.go:179] * Enabled addons: 
	I1205 07:45:25.914058  657081 addons.go:530] duration metric: took 7.023307ms for enable addons: enabled=[]
	I1205 07:45:25.914101  657081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 07:45:26.041447  657081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 07:45:26.056480  657081 node_ready.go:35] waiting up to 6m0s for node "pause-908773" to be "Ready" ...
	I1205 07:45:31.216626  657081 node_ready.go:49] node "pause-908773" is "Ready"
	I1205 07:45:31.216654  657081 node_ready.go:38] duration metric: took 5.160124588s for node "pause-908773" to be "Ready" ...
	I1205 07:45:31.216670  657081 api_server.go:52] waiting for apiserver process to appear ...
	I1205 07:45:31.216732  657081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:45:31.235916  657081 api_server.go:72] duration metric: took 5.329243197s to wait for apiserver process to appear ...
	I1205 07:45:31.235939  657081 api_server.go:88] waiting for apiserver healthz status ...
	I1205 07:45:31.235960  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:31.286548  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:45:31.286625  657081 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[healthz output identical to the 500 response above]
	I1205 07:45:31.736157  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:31.745167  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:45:31.745314  657081 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[healthz output identical to the 500 response above]
	I1205 07:45:32.236495  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:32.247993  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 07:45:32.248072  657081 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[healthz output identical to the 500 response above]
	I1205 07:45:32.736678  657081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1205 07:45:32.745888  657081 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1205 07:45:32.747571  657081 api_server.go:141] control plane version: v1.34.2
	I1205 07:45:32.747641  657081 api_server.go:131] duration metric: took 1.51169458s to wait for apiserver health ...
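	The api_server.go wait above is a plain poll-until-200 loop against /healthz, retried roughly every 500ms until the poststarthooks finish. A minimal Go sketch of the same pattern, not minikube's actual implementation; the InsecureSkipVerify transport is an assumption to keep the example self-contained (real code would trust the cluster CA instead):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    // waitHealthz polls url until it returns HTTP 200 or timeout elapses.
	    func waitHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil // healthz returned 200: ok
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	    	}
	    	return fmt.Errorf("apiserver not healthy after %s", timeout)
	    }

	    func main() {
	    	if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }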
	I1205 07:45:32.747666  657081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 07:45:32.752995  657081 system_pods.go:59] 7 kube-system pods found
	I1205 07:45:32.753034  657081 system_pods.go:61] "coredns-66bc5c9577-kc28g" [e5a81df8-535c-43be-9ffc-f298e707b2d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:45:32.753043  657081 system_pods.go:61] "etcd-pause-908773" [db7ad305-cb33-4abd-841a-4d6e331f2dfb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:45:32.753048  657081 system_pods.go:61] "kindnet-56nmf" [f3318b0d-6053-43e1-a02a-88240d3a4e98] Running
	I1205 07:45:32.753054  657081 system_pods.go:61] "kube-apiserver-pause-908773" [63ab9c97-d105-4f56-8d07-5ee051d1ff31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:45:32.753065  657081 system_pods.go:61] "kube-controller-manager-pause-908773" [60682c22-3323-43ad-b446-ee30dd08a77a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:45:32.753069  657081 system_pods.go:61] "kube-proxy-jszzp" [4bd9a8f2-e90f-4ba9-991e-7647e3b203bc] Running
	I1205 07:45:32.753082  657081 system_pods.go:61] "kube-scheduler-pause-908773" [14c8c36c-b499-4498-8b88-1c64d27a951b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:45:32.753091  657081 system_pods.go:74] duration metric: took 5.404416ms to wait for pod list to return data ...
	I1205 07:45:32.753099  657081 default_sa.go:34] waiting for default service account to be created ...
	I1205 07:45:32.756403  657081 default_sa.go:45] found service account: "default"
	I1205 07:45:32.756423  657081 default_sa.go:55] duration metric: took 3.317309ms for default service account to be created ...
	I1205 07:45:32.756432  657081 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 07:45:32.759864  657081 system_pods.go:86] 7 kube-system pods found
	I1205 07:45:32.759938  657081 system_pods.go:89] "coredns-66bc5c9577-kc28g" [e5a81df8-535c-43be-9ffc-f298e707b2d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 07:45:32.759961  657081 system_pods.go:89] "etcd-pause-908773" [db7ad305-cb33-4abd-841a-4d6e331f2dfb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 07:45:32.759982  657081 system_pods.go:89] "kindnet-56nmf" [f3318b0d-6053-43e1-a02a-88240d3a4e98] Running
	I1205 07:45:32.760021  657081 system_pods.go:89] "kube-apiserver-pause-908773" [63ab9c97-d105-4f56-8d07-5ee051d1ff31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 07:45:32.760046  657081 system_pods.go:89] "kube-controller-manager-pause-908773" [60682c22-3323-43ad-b446-ee30dd08a77a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 07:45:32.760064  657081 system_pods.go:89] "kube-proxy-jszzp" [4bd9a8f2-e90f-4ba9-991e-7647e3b203bc] Running
	I1205 07:45:32.760097  657081 system_pods.go:89] "kube-scheduler-pause-908773" [14c8c36c-b499-4498-8b88-1c64d27a951b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 07:45:32.760121  657081 system_pods.go:126] duration metric: took 3.68144ms to wait for k8s-apps to be running ...
	I1205 07:45:32.760173  657081 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 07:45:32.760279  657081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:45:32.774267  657081 system_svc.go:56] duration metric: took 14.119093ms WaitForService to wait for kubelet
	I1205 07:45:32.774343  657081 kubeadm.go:587] duration metric: took 6.867672886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 07:45:32.774432  657081 node_conditions.go:102] verifying NodePressure condition ...
	I1205 07:45:32.780597  657081 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 07:45:32.780672  657081 node_conditions.go:123] node cpu capacity is 2
	I1205 07:45:32.780698  657081 node_conditions.go:105] duration metric: took 6.240594ms to run NodePressure ...
	I1205 07:45:32.780722  657081 start.go:242] waiting for startup goroutines ...
	I1205 07:45:32.780756  657081 start.go:247] waiting for cluster config update ...
	I1205 07:45:32.780781  657081 start.go:256] writing updated cluster config ...
	I1205 07:45:32.781138  657081 ssh_runner.go:195] Run: rm -f paused
	I1205 07:45:32.785205  657081 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:45:32.785951  657081 kapi.go:59] client config for pause-908773: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/profiles/pause-908773/client.key", CAFile:"/home/jenkins/minikube-integration/21997-441321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 07:45:32.791388  657081 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kc28g" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:45:34.797445  657081 pod_ready.go:104] pod "coredns-66bc5c9577-kc28g" is not "Ready", error: <nil>
	W1205 07:45:36.798495  657081 pod_ready.go:104] pod "coredns-66bc5c9577-kc28g" is not "Ready", error: <nil>
	I1205 07:45:38.797243  657081 pod_ready.go:94] pod "coredns-66bc5c9577-kc28g" is "Ready"
	I1205 07:45:38.797273  657081 pod_ready.go:86] duration metric: took 6.00581352s for pod "coredns-66bc5c9577-kc28g" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:38.799825  657081 pod_ready.go:83] waiting for pod "etcd-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:38.804483  657081 pod_ready.go:94] pod "etcd-pause-908773" is "Ready"
	I1205 07:45:38.804513  657081 pod_ready.go:86] duration metric: took 4.662942ms for pod "etcd-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:38.806934  657081 pod_ready.go:83] waiting for pod "kube-apiserver-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	W1205 07:45:40.811954  657081 pod_ready.go:104] pod "kube-apiserver-pause-908773" is not "Ready", error: <nil>
	W1205 07:45:42.813460  657081 pod_ready.go:104] pod "kube-apiserver-pause-908773" is not "Ready", error: <nil>
	I1205 07:45:45.312638  657081 pod_ready.go:94] pod "kube-apiserver-pause-908773" is "Ready"
	I1205 07:45:45.312668  657081 pod_ready.go:86] duration metric: took 6.505698922s for pod "kube-apiserver-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.315462  657081 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.320235  657081 pod_ready.go:94] pod "kube-controller-manager-pause-908773" is "Ready"
	I1205 07:45:45.320307  657081 pod_ready.go:86] duration metric: took 4.820999ms for pod "kube-controller-manager-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.322976  657081 pod_ready.go:83] waiting for pod "kube-proxy-jszzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.328127  657081 pod_ready.go:94] pod "kube-proxy-jszzp" is "Ready"
	I1205 07:45:45.328157  657081 pod_ready.go:86] duration metric: took 5.15309ms for pod "kube-proxy-jszzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.330766  657081 pod_ready.go:83] waiting for pod "kube-scheduler-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.510956  657081 pod_ready.go:94] pod "kube-scheduler-pause-908773" is "Ready"
	I1205 07:45:45.510989  657081 pod_ready.go:86] duration metric: took 180.196818ms for pod "kube-scheduler-pause-908773" in "kube-system" namespace to be "Ready" or be gone ...
	I1205 07:45:45.511002  657081 pod_ready.go:40] duration metric: took 12.725722599s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1205 07:45:45.564247  657081 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1205 07:45:45.567351  657081 out.go:179] * Done! kubectl is now configured to use "pause-908773" cluster and "default" namespace by default
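	The pod_ready.go waits above check, per label selector (k8s-app=kube-dns, component=etcd, and so on), that each matching kube-system pod reports the PodReady condition as True. A minimal client-go sketch of that check, assuming a Clientset built from the profile kubeconfig; an illustration of the pattern, not minikube's code:

	    package readiness

	    import (
	    	"context"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // allReady reports whether every kube-system pod matching selector
	    // (e.g. "component=etcd") has the PodReady condition set to True.
	    func allReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, p := range pods.Items {
	    		ready := false
	    		for _, c := range p.Status.Conditions {
	    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    				ready = true
	    			}
	    		}
	    		if !ready {
	    			return false, nil // keep polling, as the W-level lines above do
	    		}
	    	}
	    	return true, nil
	    }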
	
	
	==> CRI-O <==
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.444575401Z" level=info msg="Started container" PID=2348 containerID=c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294 description=kube-system/coredns-66bc5c9577-kc28g/coredns id=afd3dfd8-7f53-4d47-b7f7-449740855db7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=20b2ab50e7e7bae4ddbd49ccf4ca253674ed27a728d1a63356bc9de41997b201
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.446342365Z" level=info msg="Starting container: 1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b" id=b64c6347-9fb7-47f1-bf54-e099e53e8cf3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.447430813Z" level=info msg="Created container bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3: kube-system/etcd-pause-908773/etcd" id=ef65bc48-e20f-4e83-b5a1-8e1deba0d370 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.45103816Z" level=info msg="Started container" PID=2353 containerID=d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3 description=kube-system/kindnet-56nmf/kindnet-cni id=09bcb047-60fd-44e2-ab4f-b2c12fa420fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c2044a22dc9dc9db7d082952234955f745fa8e0dc53fd19c8b5c5208e8a8229
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.458600478Z" level=info msg="Starting container: bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3" id=24479ec0-ad28-4de3-a9a0-cc80f3cf3291 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.463044718Z" level=info msg="Started container" PID=2367 containerID=1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b description=kube-system/kube-proxy-jszzp/kube-proxy id=b64c6347-9fb7-47f1-bf54-e099e53e8cf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff955faf03a3c53199467294dd8d1c3104b67229f6272fdfe53f54c496937e88
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.475959184Z" level=info msg="Started container" PID=2368 containerID=bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3 description=kube-system/etcd-pause-908773/etcd id=24479ec0-ad28-4de3-a9a0-cc80f3cf3291 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a29fcda1b070339ace0bf1c4bccbd0062ebc7ee90a752d7e8ac5f53b85e5a8b
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.503868225Z" level=info msg="Created container 42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6: kube-system/kube-scheduler-pause-908773/kube-scheduler" id=db60ff9c-89ba-4797-9d91-94da7ed4f660 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.504453225Z" level=info msg="Starting container: 42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6" id=888bca54-f4d1-4ab6-917d-5d675ee7a14e name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 07:45:26 pause-908773 crio[2065]: time="2025-12-05T07:45:26.506903299Z" level=info msg="Started container" PID=2395 containerID=42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6 description=kube-system/kube-scheduler-pause-908773/kube-scheduler id=888bca54-f4d1-4ab6-917d-5d675ee7a14e name=/runtime.v1.RuntimeService/StartContainer sandboxID=973d4809a74e772499404dbedea7d344fb73c689cfbd1d31ce868751f3fbc238
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.824271555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.828066622Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.82810567Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.828127209Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.832190735Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.832353395Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.832425831Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.835675882Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.835820031Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.835917723Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.840067083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.84023031Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.840314011Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.843481205Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 05 07:45:36 pause-908773 crio[2065]: time="2025-12-05T07:45:36.843516504Z" level=info msg="Updated default CNI network name to kindnet"
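	The CNI monitoring lines above show CRI-O reacting to CREATE/WRITE/RENAME events under /etc/cni/net.d and re-resolving the default network after each one. The same watch pattern, sketched with github.com/fsnotify/fsnotify as an assumption; CRI-O's own watcher differs in detail:

	    package main

	    import (
	    	"log"

	    	"github.com/fsnotify/fsnotify"
	    )

	    func main() {
	    	w, err := fsnotify.NewWatcher()
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer w.Close()
	    	if err := w.Add("/etc/cni/net.d"); err != nil {
	    		log.Fatal(err)
	    	}
	    	for {
	    		select {
	    		case ev := <-w.Events:
	    			// React to the same event kinds the log shows.
	    			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
	    				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
	    				// re-scan the directory and update the default network here
	    			}
	    		case err := <-w.Errors:
	    			log.Println("watch error:", err)
	    		}
	    	}
	    }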
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	42a3834bb006a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   24 seconds ago       Running             kube-scheduler            1                   973d4809a74e7       kube-scheduler-pause-908773            kube-system
	bd881eebfd2b3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   24 seconds ago       Running             etcd                      1                   0a29fcda1b070       etcd-pause-908773                      kube-system
	1c4173a6bf2a1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   24 seconds ago       Running             kube-proxy                1                   ff955faf03a3c       kube-proxy-jszzp                       kube-system
	c1f9ec3747671       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   20b2ab50e7e7b       coredns-66bc5c9577-kc28g               kube-system
	d094a8686e5ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   3c2044a22dc9d       kindnet-56nmf                          kube-system
	85fa8e67c2256       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   5b107176fe0a1       kube-controller-manager-pause-908773   kube-system
	b0092a97b6fe6       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   2e6fb8838b99d       kube-apiserver-pause-908773            kube-system
	a2d60339bcc98       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   20b2ab50e7e7b       coredns-66bc5c9577-kc28g               kube-system
	d68733d7f972f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   3c2044a22dc9d       kindnet-56nmf                          kube-system
	ac3860d5f8107       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   ff955faf03a3c       kube-proxy-jszzp                       kube-system
	497dce47fc696       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   2e6fb8838b99d       kube-apiserver-pause-908773            kube-system
	00c4de126b652       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   5b107176fe0a1       kube-controller-manager-pause-908773   kube-system
	fd1160cc4bfca       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   0a29fcda1b070       etcd-pause-908773                      kube-system
	9a3d6d291d5b0       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   973d4809a74e7       kube-scheduler-pause-908773            kube-system
	
	
	==> coredns [a2d60339bcc9865d682e8362f647ffc1c462b4e38c92889a3d0cd42a17e90ea1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35040 - 55957 "HINFO IN 8925664347791367152.3814835325500385008. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021717407s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c1f9ec37476718129f8a3378815dbe24785ceca8224b5337bb73412c8c2f5294] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57994 - 49934 "HINFO IN 8513627343155943591.3042283095642630719. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02604761s
	
	
	==> describe nodes <==
	Name:               pause-908773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-908773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d53406164b08000276c1d84507c3250851dada45
	                    minikube.k8s.io/name=pause-908773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_05T07_44_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 05 Dec 2025 07:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-908773
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 05 Dec 2025 07:45:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:44:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:44:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:44:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 05 Dec 2025 07:45:13 +0000   Fri, 05 Dec 2025 07:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-908773
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                d64ce5e1-7c0e-4070-8226-19bec79dffeb
	  Boot ID:                    6438d548-ea0a-487b-93bc-8af12c014d83
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-kc28g                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-908773                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-56nmf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-908773             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-908773    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-jszzp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-908773             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 77s   kube-proxy       
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 84s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s   kubelet          Node pause-908773 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s   kubelet          Node pause-908773 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s   kubelet          Node pause-908773 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s   node-controller  Node pause-908773 event: Registered Node pause-908773 in Controller
	  Normal   NodeReady                37s   kubelet          Node pause-908773 status is now: NodeReady
	  Normal   RegisteredNode           16s   node-controller  Node pause-908773 event: Registered Node pause-908773 in Controller
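	For reference, the allocated totals above are the column sums of the pod table: CPU requests 100m+100m+100m+250m+200m+0+100m = 850m, i.e. 850/2000 ≈ 42% of the node's 2-CPU capacity; memory requests 70Mi+100Mi+50Mi = 220Mi, about 2% of the 8022300Ki allocatable.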
	
	
	==> dmesg <==
	[ +33.737398] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:10] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:11] overlayfs: idmapped layers are currently not supported
	[  +3.073089] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:12] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:13] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:14] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:19] overlayfs: idmapped layers are currently not supported
	[ +33.161652] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:21] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:22] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:23] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:24] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:25] overlayfs: idmapped layers are currently not supported
	[ +19.047599] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:26] overlayfs: idmapped layers are currently not supported
	[ +16.337115] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:27] overlayfs: idmapped layers are currently not supported
	[ +25.534355] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:28] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:30] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:32] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:33] overlayfs: idmapped layers are currently not supported
	[ +28.256020] overlayfs: idmapped layers are currently not supported
	[Dec 5 07:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bd881eebfd2b308dfefda4c7d23db2f2b39ee5cc99a89c4a5f5924e6aeeabbc3] <==
	{"level":"warn","ts":"2025-12-05T07:45:28.765734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:28.814759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:28.886763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:28.952337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.000779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.039846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.060048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.076442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.115260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.174597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.216727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.227389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.276564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.284790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.295112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.329401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.378701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.395050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.423217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.440539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.474686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.522238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.542906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.556957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:45:29.695294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	
	
	==> etcd [fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2] <==
	{"level":"warn","ts":"2025-12-05T07:44:23.436405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.453721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.471703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.496002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.519138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.539000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-05T07:44:23.635989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-05T07:45:17.699806Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-05T07:45:17.699847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-908773","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-05T07:45:17.699932Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T07:45:17.845807Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-05T07:45:17.847336Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T07:45:17.847389Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847411Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-05T07:45:17.847451Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847459Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-05T07:45:17.847466Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-12-05T07:45:17.847469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847510Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-05T07:45:17.847523Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-05T07:45:17.847529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T07:45:17.851004Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-05T07:45:17.851088Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-05T07:45:17.851121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-05T07:45:17.851137Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-908773","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 07:45:50 up  4:27,  0 user,  load average: 2.70, 1.77, 1.85
	Linux pause-908773 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d094a8686e5ba0f3ec3a732e3a223076a61da4f5dd90c62847f3e01829326cb3] <==
	I1205 07:45:26.619303       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:45:26.619705       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:45:26.619886       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:45:26.619932       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:45:26.619966       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:45:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:45:26.823370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:45:26.830418       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:45:26.830485       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:45:26.832770       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1205 07:45:31.434451       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:45:31.434497       1 metrics.go:72] Registering metrics
	I1205 07:45:31.434577       1 controller.go:711] "Syncing nftables rules"
	I1205 07:45:36.823892       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:45:36.823941       1 main.go:301] handling current node
	I1205 07:45:46.823743       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:45:46.823773       1 main.go:301] handling current node
	
	
	==> kindnet [d68733d7f972f0ded1c6971d99d99c7001a5e4f6aa595c25786f1f8bfe47cb97] <==
	I1205 07:44:33.022172       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 07:44:33.022570       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1205 07:44:33.022690       1 main.go:148] setting mtu 1500 for CNI 
	I1205 07:44:33.022702       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 07:44:33.022716       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-05T07:44:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1205 07:44:33.227484       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1205 07:44:33.227503       1 controller.go:381] "Waiting for informer caches to sync"
	I1205 07:44:33.227512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1205 07:44:33.227885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1205 07:45:03.227154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1205 07:45:03.227167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1205 07:45:03.227274       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1205 07:45:03.228638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1205 07:45:04.628374       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1205 07:45:04.628407       1 metrics.go:72] Registering metrics
	I1205 07:45:04.628470       1 controller.go:711] "Syncing nftables rules"
	I1205 07:45:13.232379       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1205 07:45:13.232474       1 main.go:301] handling current node
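
The reflector errors in this older kindnet instance are client-go's standard list/watch recovery: each failed List is retried with backoff, and once the API server answers again the shared informer reports "Caches are synced". A hedged sketch of the same pattern, assuming a kubeconfig on disk (kindnet itself builds an in-cluster client):

	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
		nodes := factory.Core().V1().Nodes().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
		// Blocks until the initial List succeeds; i/o timeouts like the ones
		// logged above are retried internally by the reflector with backoff.
		if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
			log.Fatal("timed out waiting for caches to sync")
		}
		log.Println("caches are synced")
	}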
	
	
	==> kube-apiserver [497dce47fc69651d8415137bdc4063272942fd82f265e98f60f6540dbc963f2f] <==
	W1205 07:45:17.715551       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.715947       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.715977       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716003       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716027       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716052       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716078       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716107       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716131       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716155       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716179       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716204       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716229       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.716256       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717748       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717775       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717797       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.717818       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718030       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718070       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718560       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718586       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718612       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 07:45:17.718926       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
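
The wall of gRPC warnings is the old API server's etcd client: each storage channel keeps redialing 127.0.0.1:2379 after etcd has already closed its listener, so every attempt ends in "connection refused". Roughly the same failure can be reproduced with a bare etcd v3 client; the endpoint below is taken from the logs, and TLS is omitted for brevity:

	package main

	import (
		"context"
		"log"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"},
			DialTimeout: 2 * time.Second,
		})
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		// With etcd stopped this returns a transport error; the client's
		// gRPC channels keep retrying, which is what fills the log above.
		if _, err := cli.Get(ctx, "health-probe"); err != nil {
			log.Printf("etcd unreachable: %v", err)
		}
	}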
	
	
	==> kube-apiserver [b0092a97b6fe65c8ffaef671961badb67a3e4d6d62d80c42d2051994069dcaae] <==
	I1205 07:45:31.203821       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 07:45:31.203840       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 07:45:31.237281       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1205 07:45:31.270212       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1205 07:45:31.270269       1 policy_source.go:240] refreshing policies
	I1205 07:45:31.271000       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1205 07:45:31.272868       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 07:45:31.279606       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1205 07:45:31.280038       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 07:45:31.280072       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 07:45:31.281690       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1205 07:45:31.281728       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 07:45:31.296239       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1205 07:45:31.296305       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1205 07:45:31.302561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1205 07:45:31.304143       1 cache.go:39] Caches are synced for autoregister controller
	I1205 07:45:31.383390       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 07:45:31.396528       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 07:45:31.403131       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1205 07:45:31.720924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 07:45:32.833930       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1205 07:45:34.284639       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 07:45:34.384241       1 controller.go:667] quota admission added evaluator for: endpoints
	I1205 07:45:34.438533       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1205 07:45:34.538233       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [00c4de126b652c2eeabcc39993379dbbb81479999412ec63e2f1384ef3779896] <==
	I1205 07:44:31.446882       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1205 07:44:31.451529       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1205 07:44:31.458928       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1205 07:44:31.478402       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:44:31.478509       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 07:44:31.478629       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 07:44:31.478677       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1205 07:44:31.478522       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1205 07:44:31.478645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1205 07:44:31.478736       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:44:31.481810       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 07:44:31.481868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:44:31.490080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1205 07:44:31.492318       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:44:31.492432       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:44:31.492513       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-908773"
	I1205 07:44:31.492575       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1205 07:44:31.498852       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1205 07:44:31.517281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1205 07:44:31.526045       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:44:31.527192       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:44:31.527306       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:44:31.527342       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 07:44:31.527356       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 07:45:16.499115       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [85fa8e67c22567024fa7b705e2ed7964a9445424ae54ee1525dc8ba5fb4a3a6a] <==
	I1205 07:45:34.144977       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1205 07:45:34.151342       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1205 07:45:34.153608       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1205 07:45:34.156853       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1205 07:45:34.159098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1205 07:45:34.162403       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1205 07:45:34.170607       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1205 07:45:34.171779       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1205 07:45:34.175015       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1205 07:45:34.176232       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:45:34.176327       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1205 07:45:34.177464       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1205 07:45:34.177500       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1205 07:45:34.177576       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1205 07:45:34.177892       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1205 07:45:34.178007       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 07:45:34.178096       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1205 07:45:34.178152       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1205 07:45:34.178145       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-908773"
	I1205 07:45:34.178334       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 07:45:34.178438       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1205 07:45:34.185006       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1205 07:45:34.185027       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1205 07:45:34.185034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 07:45:34.187874       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [1c4173a6bf2a12bd66bf1109f1a692487d03109a7b5f6476ac9d97145f732f2b] <==
	I1205 07:45:26.591013       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:45:27.579490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:45:31.394446       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:45:31.410459       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:45:31.449129       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:45:32.055528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:45:32.055646       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:45:32.125591       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:45:32.125920       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:45:32.134500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:45:32.144122       1 config.go:200] "Starting service config controller"
	I1205 07:45:32.144210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:45:32.144251       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:45:32.144310       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:45:32.144346       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:45:32.144372       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:45:32.146159       1 config.go:309] "Starting node config controller"
	I1205 07:45:32.173026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:45:32.173127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:45:32.249541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:45:32.257360       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1205 07:45:32.257718       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ac3860d5f81074887386ff7e8edda702dca82f8d5e9082dbc0dfd651a6eea9e7] <==
	I1205 07:44:33.041022       1 server_linux.go:53] "Using iptables proxy"
	I1205 07:44:33.200314       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1205 07:44:33.301158       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1205 07:44:33.301200       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1205 07:44:33.301298       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 07:44:33.365055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 07:44:33.365200       1 server_linux.go:132] "Using iptables Proxier"
	I1205 07:44:33.370993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 07:44:33.371404       1 server.go:527] "Version info" version="v1.34.2"
	I1205 07:44:33.371600       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:44:33.372958       1 config.go:200] "Starting service config controller"
	I1205 07:44:33.373017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1205 07:44:33.373056       1 config.go:106] "Starting endpoint slice config controller"
	I1205 07:44:33.373082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1205 07:44:33.373132       1 config.go:403] "Starting serviceCIDR config controller"
	I1205 07:44:33.373173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1205 07:44:33.373833       1 config.go:309] "Starting node config controller"
	I1205 07:44:33.373888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1205 07:44:33.373927       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1205 07:44:33.474094       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1205 07:44:33.474125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1205 07:44:33.474149       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [42a3834bb006a7f9cf6c955bf2119b4739235484cdb95d9b5baa3068103d1ce6] <==
	I1205 07:45:28.877645       1 serving.go:386] Generated self-signed cert in-memory
	I1205 07:45:31.980367       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1205 07:45:31.991008       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 07:45:32.017861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1205 07:45:32.018958       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1205 07:45:32.019050       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1205 07:45:32.019137       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 07:45:32.021330       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:32.035366       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:32.026462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 07:45:32.035843       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 07:45:32.123448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1205 07:45:32.136646       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:32.136596       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623] <==
	E1205 07:44:25.175499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1205 07:44:25.175586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1205 07:44:25.175638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1205 07:44:25.175702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1205 07:44:25.175951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1205 07:44:25.176117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1205 07:44:25.176259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1205 07:44:25.176458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1205 07:44:25.178561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1205 07:44:25.178767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1205 07:44:25.178881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1205 07:44:25.179134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1205 07:44:25.179200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1205 07:44:25.179260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1205 07:44:25.179328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1205 07:44:25.179409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1205 07:44:25.179471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1205 07:44:25.179540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1205 07:44:26.561078       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:17.696604       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1205 07:45:17.696631       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1205 07:45:17.696655       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 07:45:17.696680       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 07:45:17.696805       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1205 07:45:17.696818       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.284403    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="04c7812fd289db76f39fbc2b2cae5a9f" pod="kube-system/kube-controller-manager-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.284533    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0e094c2614ecc034efbe45f620b6d31a" pod="kube-system/kube-scheduler-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: I1205 07:45:26.291035    1302 scope.go:117] "RemoveContainer" containerID="fd1160cc4bfcaa8f1f379d4537a127a12cefacf1cf542cad94589a8c6b50efa2"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.291667    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb30420b19ae6f4e6ba10fe4524a0bed" pod="kube-system/kube-apiserver-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.292055    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f8027ba6b18dd5d42b44eb6d3c739a7" pod="kube-system/etcd-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.292408    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="04c7812fd289db76f39fbc2b2cae5a9f" pod="kube-system/kube-controller-manager-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.300193    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0e094c2614ecc034efbe45f620b6d31a" pod="kube-system/kube-scheduler-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.304597    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-56nmf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f3318b0d-6053-43e1-a02a-88240d3a4e98" pod="kube-system/kindnet-56nmf"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.305020    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jszzp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4bd9a8f2-e90f-4ba9-991e-7647e3b203bc" pod="kube-system/kube-proxy-jszzp"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.305368    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-kc28g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5a81df8-535c-43be-9ffc-f298e707b2d5" pod="kube-system/coredns-66bc5c9577-kc28g"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: I1205 07:45:26.310589    1302 scope.go:117] "RemoveContainer" containerID="9a3d6d291d5b03fb8cf99a7e65b88f2e2875a86dd29b10e3b4edb33e80141623"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.311184    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f8027ba6b18dd5d42b44eb6d3c739a7" pod="kube-system/etcd-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.311702    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="04c7812fd289db76f39fbc2b2cae5a9f" pod="kube-system/kube-controller-manager-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.312832    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0e094c2614ecc034efbe45f620b6d31a" pod="kube-system/kube-scheduler-pause-908773"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.317913    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-56nmf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f3318b0d-6053-43e1-a02a-88240d3a4e98" pod="kube-system/kindnet-56nmf"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.322132    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jszzp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4bd9a8f2-e90f-4ba9-991e-7647e3b203bc" pod="kube-system/kube-proxy-jszzp"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.322364    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-kc28g\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5a81df8-535c-43be-9ffc-f298e707b2d5" pod="kube-system/coredns-66bc5c9577-kc28g"
	Dec 05 07:45:26 pause-908773 kubelet[1302]: E1205 07:45:26.322604    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-908773\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb30420b19ae6f4e6ba10fe4524a0bed" pod="kube-system/kube-apiserver-pause-908773"
	Dec 05 07:45:30 pause-908773 kubelet[1302]: E1205 07:45:30.899779    1302 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-908773\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-908773' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 05 07:45:30 pause-908773 kubelet[1302]: E1205 07:45:30.900443    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-56nmf\" is forbidden: User \"system:node:pause-908773\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-908773' and this object" podUID="f3318b0d-6053-43e1-a02a-88240d3a4e98" pod="kube-system/kindnet-56nmf"
	Dec 05 07:45:31 pause-908773 kubelet[1302]: E1205 07:45:31.140213    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-jszzp\" is forbidden: User \"system:node:pause-908773\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-908773' and this object" podUID="4bd9a8f2-e90f-4ba9-991e-7647e3b203bc" pod="kube-system/kube-proxy-jszzp"
	Dec 05 07:45:37 pause-908773 kubelet[1302]: W1205 07:45:37.240262    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 05 07:45:46 pause-908773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 05 07:45:46 pause-908773 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 05 07:45:46 pause-908773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-908773 -n pause-908773
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-908773 -n pause-908773: exit status 2 (350.52656ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-908773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
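
For reference, the post-mortem status check boils down to running the minikube binary and treating a non-zero exit as informational rather than fatal. A hedged approximation of what helpers_test.go:262 does (not minikube's actual helper code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"status", "--format={{.APIServer}}", "-p", "pause-908773")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 2 still produces usable stdout ("Running" here);
			// the harness records it as "status error ... (may be ok)".
			fmt.Printf("exit %d, stdout: %s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("stdout: %s", out)
	}
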
--- FAIL: TestPause/serial/Pause (6.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.069s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(the warning above is repeated, verbatim, 39 more times in this excerpt while the API server at 192.168.76.2:8443 remains unreachable)
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
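The repeated WARNING above is the test helper polling the apiserver for dashboard pods by label selector while the node is stopped, so every List attempt fails with "connection refused" until the cluster restarts. The following is a minimal sketch of such a poll loop using client-go; it is not minikube's actual helper, and the kubeconfig path, timeout, and sleep interval are illustrative assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from a kubeconfig (path is a placeholder, not from the log).
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// Poll for dashboard pods by label selector. While the apiserver is down,
    	// each List call returns a dial error, which is logged as a warning and
    	// retried, mirroring the helpers_test.go:337 output above.
    	deadline := time.Now().Add(9 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
    			LabelSelector: "k8s-app=kubernetes-dashboard",
    		})
    		if err != nil {
    			fmt.Printf("WARNING: pod list for %q %q returned: %v\n",
    				"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", err)
    			time.Sleep(3 * time.Second)
    			continue
    		}
    		if len(pods.Items) > 0 {
    			fmt.Println("dashboard pod found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for dashboard pods")
    }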
E1205 08:08:44.277141  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1205 08:08:45.392028  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [last message repeated 8 more times]
E1205 08:08:54.493281  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/old-k8s-version-526185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [last message repeated 66 more times]
E1205 08:10:01.187197  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/default-k8s-diff-port-029291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [last message repeated 5 more times]
E1205 08:10:07.351601  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	(previous warning repeated 10 more times)
E1205 08:10:17.557675  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/old-k8s-version-526185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	(previous warning repeated 20 more times)
E1205 08:10:39.247067  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	(previous warning repeated 20 more times)
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (24m31s)
		TestStartStop (27m3s)
		TestStartStop/group/newest-cni (14m24s)
		TestStartStop/group/newest-cni/serial (14m24s)
		TestStartStop/group/newest-cni/serial/SecondStart (4m18s)
		TestStartStop/group/no-preload (20m30s)
		TestStartStop/group/no-preload/serial (20m30s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4m2s)

goroutine 5963 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000485180, 0x40002bbbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400071a138, {0x534c580, 0x2c, 0x2c}, {0x40002bbd08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40007961e0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40007961e0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 5479 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001994710, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001994700)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ee2fc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004658f0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000082310}, 0x400153ff38, {0x369d700, 0x40017e8300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x40017e8300?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400147dd40, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5476
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 179 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2515 [IO wait, 99 minutes]:
internal/poll.runtime_pollWait(0xffff44d90c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001b0a380?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001b0a380)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001b0a380)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40001d1640)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40001d1640)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004ac600, {0x36d31a0, 0x40001d1640})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004ac600)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 2513
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 172 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40004b3500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 165
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3082 [chan send, 70 minutes]:
os/exec.(*Cmd).watchCtx(0x40004b2c00, 0x400154c770)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3081
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5474 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e57f8, 0x4000465ab0}, {0x36d3800, 0x4001475e80}, 0x1, 0x0, 0x4000953be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e57f8?, 0x40004a2a80?}, 0x3b9aca00, 0x4000953e08?, 0x1, 0x4000953be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e57f8, 0x40004a2a80}, 0x40017e2000, {0x40024a22d0, 0x11}, {0x2993fae, 0x14}, {0x29abe5c, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:379 +0x22c
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36e57f8, 0x40004a2a80}, 0x40017e2000, {0x40024a22d0, 0x11}, {0x2978519?, 0x1e9e805500161e84?}, {0x693292a1?, 0x4000ccff58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0xf8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40017e2000?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40017e2000, 0x4001b0a180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5086
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2410 [select, 99 minutes]:
net/http.(*persistConn).readLoop(0x40014be6c0)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 2408
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 5086 [chan receive, 4 minutes]:
testing.(*T).Run(0x40002e3c00, {0x2999fbb?, 0x40000006ee?}, 0x4001b0a180)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40002e3c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40002e3c00, 0x400024e480)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4680
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 178 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000082310}, 0x40000d8f40, 0x40000d8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000082310}, 0x0?, 0x40000d8f40, 0x40000d8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000082310?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000740780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 173
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 177 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001994850, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001994840)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015493e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004ba540?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000082310}, 0x40000d2f38, {0x369d700, 0x40007b4030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369d700?, 0x40007b4030?}, 0x90?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000932320, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 173
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5480 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000082310}, 0x4001412f40, 0x4001412f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000082310}, 0x38?, 0x4001412f40, 0x4001412f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000082310?}, 0x40014d07a8?, 0x40002eb180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000cd4e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5476
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 173 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015493e0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 165
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5240 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001994b10, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001994b00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40009159e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400154cd90?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000082310?}, 0x40014a46a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000082310}, 0x4000909f38, {0x369d700, 0x4000927200}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40014a47a8?, {0x369d700?, 0x4000927200?}, 0x10?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001d7e720, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5249
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4970 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x4001485880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4966
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2411 [select, 99 minutes]:
net/http.(*persistConn).writeLoop(0x40014be6c0)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 2408
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 2292 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x40014af800, 0x400196e770)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 773
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4728 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001485180)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001485180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001485180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001485180, 0x400024e280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 646 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff4520f000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400024e580?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400024e580)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400024e580)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40001d1c00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40001d1c00)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000104b00, {0x36d31a0, 0x40001d1c00})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000104b00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 644
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 4409 [chan receive, 24 minutes]:
testing.(*T).Run(0x40017d5a40, {0x296d53f?, 0xda15effb4b3?}, 0x400198d0f8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40017d5a40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40017d5a40, 0x339b4c8)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 768 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40007244d0, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40007244c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000915320)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400022fc70?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000082310}, 0x4001466f38, {0x369d700, 0x40019df9e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x40019df9e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40007b9710, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 840
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4680 [chan receive, 20 minutes]:
testing.(*T).Run(0x4001484a80, {0x296e9b1?, 0x0?}, 0x400024e480)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001484a80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001484a80, 0x4001994200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4676
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5481 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5480
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2736 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40017e2000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2699
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4975 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000082310}, 0x40014d2740, 0x400153df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000082310}, 0x52?, 0x40014d2740, 0x40014d2788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000082310?}, 0x0?, 0x40014d2750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x40001bc080?, 0x4001485880?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4971
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 850 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 849
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4676 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001484000, 0x339b6f8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4477
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5476 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ee2fc0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5474
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 849 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000082310}, 0x400009e740, 0x400090ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000082310}, 0x84?, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000082310?}, 0x161f90?, 0x40002e3dc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40015a0300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 840
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 839 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x4001501c00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 838
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 840 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000915320, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 838
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3192 [chan send, 70 minutes]:
os/exec.(*Cmd).watchCtx(0x4000046600, 0x40016c05b0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2690
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4477 [chan receive, 27 minutes]:
testing.(*T).Run(0x4001484380, {0x296d53f?, 0x4001468f58?}, 0x339b6f8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001484380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001484380, 0x339b510)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4801 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001500700)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001500700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001500700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001500700, 0x4001b0a580)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4864 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001485c00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001485c00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001485c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001485c00, 0x400024f080)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2737 [chan receive, 72 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001e302a0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2699
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5432 [syscall, 4 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x4000cc5b18, 0x4, 0x40007543f0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4000cc5c78?, 0x1929a0?, 0xffffd237c1a1?, 0x0?, 0x400142c340?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001d14040)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4000cc5c48?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40019c0000)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40019c0000)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4000cd4e00, 0x40019c0000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateSecondStart({0x36e57f8, 0x400022caf0}, 0x4000cd4e00, {0x4001480120, 0x11}, {0x1ae228e7?, 0x1ae228e700161e84?}, {0x69329292?, 0x400153ef58?}, {0x40004ac300?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0x90
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4000cd4e00?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4000cd4e00, 0x400024e500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5354
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4727 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001484fc0, 0x400198d0f8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4409
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4678 [chan receive, 14 minutes]:
testing.(*T).Run(0x4001484700, {0x296e9b1?, 0x0?}, 0x4001b0a100)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001484700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001484700, 0x4001994180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4676
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5475 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40017e2000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5474
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2191 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0x4000046d80, 0x4000083420)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2190
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3135 [chan send, 70 minutes]:
os/exec.(*Cmd).watchCtx(0x40004b3c80, 0x400154d570)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3134
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5354 [chan receive, 4 minutes]:
testing.(*T).Run(0x4000485880, {0x297a650?, 0x40000006ee?}, 0x400024e500)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4000485880)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4000485880, 0x4001b0a100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4678
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5232 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x40001bc080?}, 0x40019c0600?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5236
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2742 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2741
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2741 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000082310}, 0x40000a0f40, 0x4000ccbf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000082310}, 0xa8?, 0x40000a0f40, 0x40000a0f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000082310?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40007871e0?, 0x95c64?, 0x40015a0480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2737
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 2740 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40001d1d10, 0x22)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40001d1d00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e302a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000438620?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000082310?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000082310}, 0x400146cf38, {0x369d700, 0x400063db30}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x400063db30?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001918f40, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2737
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

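Note on goroutine 2740 (and 4974 below): these are client-go certificate-rotation workers parked on an empty work queue; the sync.Cond.Wait inside workqueue.(*Typed[...]).Get is the idle state, not a deadlock. A minimal sketch of that worker loop, assuming a typed string queue (an illustration of the pattern in the stack, not client-go's actual cert_rotation.go):

package sketch

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// runWorker blocks in Get() until an item arrives or the queue shuts down,
// mirroring the dynamicClientCert.processNextWorkItem loop in the stack.
func runWorker(queue workqueue.TypedInterface[string]) {
	for {
		item, shutdown := queue.Get() // parks in sync.Cond.Wait while the queue is empty
		if shutdown {
			return
		}
		fmt.Println("processing", item)
		queue.Done(item) // mark the item finished so it can be re-added later
	}
}

wait.Until (also visible in the stack) simply re-invokes the worker every second if it returns while the surrounding context is still alive.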
goroutine 5241 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000082310}, 0x40014a6740, 0x400090df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000082310}, 0xdd?, 0x40014a6740, 0x40014a6788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000082310?}, 0x0?, 0x40014a6750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x40001bc080?, 0x40019c0600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5249
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4865 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001485dc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001485dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001485dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001485dc0, 0x400024f100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5249 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40009159e0, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5236
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 2229 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0x40014ae480, 0x40014c5c70)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2228
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5434 [IO wait]:
internal/poll.runtime_pollWait(0xffff44d90000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001c782a0?, 0x4001bc10fd?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001c782a0, {0x4001bc10fd, 0x52f03, 0x52f03})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40004980f8, {0x4001bc10fd?, 0x40014a4568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40007b6450, {0x369bad8, 0x4000722298})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bcc0, 0x40007b6450}, {0x369bad8, 0x4000722298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40004980f8?, {0x369bcc0, 0x40007b6450})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40004980f8, {0x369bcc0, 0x40007b6450})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bcc0, 0x40007b6450}, {0x369bb58, 0x40004980f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000740600?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5432
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5242 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5241
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4976 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4975
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5435 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0x40019c0000, 0x40004ba0e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5432
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4974 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000724c90, 0x15)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000724c80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ee2c00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400154ca10?, 0x36e5788?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000082310?}, 0x296bd45?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000082310}, 0x400140df38, {0x369d700, 0x4000911b00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369d700?, 0x4000911b00?}, 0x90?, 0x4000046600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001d7f400, 0x3b9aca00, 0x0, 0x1, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4971
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5433 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0xffff4520ee00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001c781e0?, 0x40014fdb62?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001c781e0, {0x40014fdb62, 0x49e, 0x49e})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40004980d8, {0x40014fdb62?, 0x40014d0568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40007b6390, {0x369bad8, 0x4000722290})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bcc0, 0x40007b6390}, {0x369bad8, 0x4000722290}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40004980d8?, {0x369bcc0, 0x40007b6390})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40004980d8, {0x369bcc0, 0x40007b6390})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bcc0, 0x40007b6390}, {0x369bb58, 0x40004980d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000cd4e00?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5432
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4798 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001948fc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001948fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001948fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001948fc0, 0x4001b0a300)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4800 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001500540)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001500540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001500540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001500540, 0x4001b0a480)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4799 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0x40006f8730)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015001c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015001c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015001c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015001c0, 0x4001b0a400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4727
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4971 [chan receive, 22 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ee2c00, 0x4000082310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4966
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

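Note on the stacks above: the goroutines blocked for 24 minutes in testing.(*testState).waitParallel (4865, 4798, 4799, 4800) are not hung in minikube code; they are subtests that called t.Parallel() and are waiting for the runner to free a parallel slot. A minimal sketch of that pattern, with hypothetical subtest names (an illustration, not the actual TestNetworkPlugins code):

package integration

import "testing"

func TestParallelGateSketch(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} {
		name := name // capture for the closure (pre-Go 1.22 loop semantics)
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks in testing.(*testState).waitParallel until a slot frees
			t.Logf("would exercise network plugin %q", name)
		})
	}
}

Each subtest returns from t.Parallel() only when the number of running parallel tests drops below -test.parallel, which is why long-running siblings show up in the dump as multi-minute "chan receive" waits.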

Test pass (224/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.69
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 4.67
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.42
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.57
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 148.12
40 TestAddons/serial/GCPAuth/Namespaces 0.21
41 TestAddons/serial/GCPAuth/FakeCredentials 10.91
57 TestAddons/StoppedEnableDisable 12.6
58 TestCertOptions 33.63
59 TestCertExpiration 264.98
61 TestForceSystemdFlag 40.25
62 TestForceSystemdEnv 46.18
67 TestErrorSpam/setup 29.78
68 TestErrorSpam/start 0.79
69 TestErrorSpam/status 1.07
70 TestErrorSpam/pause 6.78
71 TestErrorSpam/unpause 5.78
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 81.69
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 27.69
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.63
84 TestFunctional/serial/CacheCmd/cache/add_local 1.64
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.8
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 35.91
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.48
95 TestFunctional/serial/LogsFileCmd 1.49
96 TestFunctional/serial/InvalidService 4.08
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 15.14
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.07
107 TestFunctional/parallel/AddonsCmd 0.24
108 TestFunctional/parallel/PersistentVolumeClaim 25.97
110 TestFunctional/parallel/SSHCmd 0.72
111 TestFunctional/parallel/CpCmd 2.38
113 TestFunctional/parallel/FileSync 0.38
114 TestFunctional/parallel/CertSync 2.19
118 TestFunctional/parallel/NodeLabels 0.15
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
122 TestFunctional/parallel/License 0.31
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
138 TestFunctional/parallel/MountCmd/any-port 7.38
139 TestFunctional/parallel/MountCmd/specific-port 1.18
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
141 TestFunctional/parallel/ServiceCmd/List 1.46
142 TestFunctional/parallel/ServiceCmd/JSONOutput 1.34
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 0.8
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
153 TestFunctional/parallel/ImageCommands/Setup 0.6
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.28
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.96
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.82
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.95
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.76
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.77
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.33
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.84
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.26
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.43
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.71
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.34
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.71
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.52
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 215.23
265 TestMultiControlPlane/serial/DeployApp 6.06
266 TestMultiControlPlane/serial/PingHostFromPods 1.54
267 TestMultiControlPlane/serial/AddWorkerNode 59.87
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
270 TestMultiControlPlane/serial/CopyFile 20.21
271 TestMultiControlPlane/serial/StopSecondaryNode 12.91
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
273 TestMultiControlPlane/serial/RestartSecondaryNode 30.3
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 125.12
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.75
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.84
278 TestMultiControlPlane/serial/StopCluster 36.08
279 TestMultiControlPlane/serial/RestartCluster 84.85
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
281 TestMultiControlPlane/serial/AddSecondaryNode 80.2
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
287 TestJSONOutput/start/Command 77.48
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.81
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.26
312 TestKicCustomNetwork/create_custom_network 36.47
313 TestKicCustomNetwork/use_default_bridge_network 34.14
314 TestKicExistingNetwork 35.59
315 TestKicCustomSubnet 37
316 TestKicStaticIP 37.44
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 71.6
321 TestMountStart/serial/StartWithMountFirst 8.69
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 9.05
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.7
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 8.31
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 137.38
333 TestMultiNode/serial/DeployApp2Nodes 6.89
334 TestMultiNode/serial/PingHostFrom2Pods 0.92
335 TestMultiNode/serial/AddNode 56.2
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.76
338 TestMultiNode/serial/CopyFile 10.49
339 TestMultiNode/serial/StopNode 2.38
340 TestMultiNode/serial/StartAfterStop 8.7
341 TestMultiNode/serial/RestartKeepsNodes 73.4
342 TestMultiNode/serial/DeleteNode 5.88
343 TestMultiNode/serial/StopMultiNode 24.07
344 TestMultiNode/serial/RestartMultiNode 48.46
345 TestMultiNode/serial/ValidateNameConflict 37.48
350 TestPreload 122.78
352 TestScheduledStopUnix 110.09
355 TestInsufficientStorage 13.28
356 TestRunningBinaryUpgrade 299.04
359 TestMissingContainerUpgrade 123.44
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 47.37
363 TestNoKubernetes/serial/StartWithStopK8s 7.55
364 TestNoKubernetes/serial/Start 9.63
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
367 TestNoKubernetes/serial/ProfileList 2.97
368 TestNoKubernetes/serial/Stop 1.3
369 TestNoKubernetes/serial/StartNoArgs 7.07
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 1.37
372 TestStoppedBinaryUpgrade/Upgrade 303.51
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.81
382 TestPause/serial/Start 78.68
383 TestPause/serial/SecondStartNoReconfiguration 29.55
TestDownloadOnly/v1.28.0/json-events (5.69s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-199160 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-199160 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.691577668s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.69s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1205 06:11:05.666210  444147 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1205 06:11:05.666321  444147 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
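The preload-exists subtest reduces to a file-existence check on the cached tarball logged above. A minimal sketch under that assumption (path layout taken from the log; the test name is hypothetical):

package integration

import (
	"os"
	"path/filepath"
	"testing"
)

func TestPreloadExistsSketch(t *testing.T) {
	// MINIKUBE_HOME points at .../.minikube in this job's environment.
	tarball := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		t.Fatalf("expected preload tarball at %s: %v", tarball, err)
	}
}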

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-199160
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-199160: exit status 85 (96.289459ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-199160 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:11:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:11:00.041587  444152 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:11:00.042555  444152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:00.042613  444152 out.go:374] Setting ErrFile to fd 2...
	I1205 06:11:00.042637  444152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:00.043178  444152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	W1205 06:11:00.044325  444152 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-441321/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-441321/.minikube/config/config.json: no such file or directory
	I1205 06:11:00.044961  444152 out.go:368] Setting JSON to true
	I1205 06:11:00.045956  444152 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10387,"bootTime":1764904673,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:11:00.046090  444152 start.go:143] virtualization:  
	I1205 06:11:00.054973  444152 out.go:99] [download-only-199160] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1205 06:11:00.055250  444152 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 06:11:00.055329  444152 notify.go:221] Checking for updates...
	I1205 06:11:00.065479  444152 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:11:00.086748  444152 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:11:00.093445  444152 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:00.096880  444152 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:11:00.100416  444152 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 06:11:00.126629  444152 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:11:00.127007  444152 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:11:00.204348  444152 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:11:00.204531  444152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:00.338770  444152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-05 06:11:00.322299484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:00.338889  444152 docker.go:319] overlay module found
	I1205 06:11:00.344588  444152 out.go:99] Using the docker driver based on user configuration
	I1205 06:11:00.344649  444152 start.go:309] selected driver: docker
	I1205 06:11:00.344657  444152 start.go:927] validating driver "docker" against <nil>
	I1205 06:11:00.344791  444152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:00.452067  444152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-05 06:11:00.44116418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:00.452254  444152 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:11:00.452578  444152 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1205 06:11:00.452746  444152 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:11:00.460874  444152 out.go:171] Using Docker driver with root privileges
	I1205 06:11:00.464138  444152 cni.go:84] Creating CNI manager for ""
	I1205 06:11:00.464234  444152 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 06:11:00.464256  444152 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 06:11:00.464353  444152 start.go:353] cluster config:
	{Name:download-only-199160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-199160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:11:00.467575  444152 out.go:99] Starting "download-only-199160" primary control-plane node in "download-only-199160" cluster
	I1205 06:11:00.467610  444152 cache.go:134] Beginning downloading kic base image for docker with crio
	I1205 06:11:00.470694  444152 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1205 06:11:00.470773  444152 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 06:11:00.470865  444152 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1205 06:11:00.496995  444152 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1205 06:11:00.497019  444152 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:11:00.497200  444152 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1205 06:11:00.497302  444152 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1205 06:11:00.523110  444152 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1205 06:11:00.523136  444152 cache.go:65] Caching tarball of preloaded images
	I1205 06:11:00.523310  444152 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 06:11:00.526780  444152 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1205 06:11:00.526815  444152 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1205 06:11:00.621486  444152 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1205 06:11:00.621646  444152 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1205 06:11:04.415533  444152 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1205 06:11:04.415915  444152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/download-only-199160/config.json ...
	I1205 06:11:04.415952  444152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/download-only-199160/config.json: {Name:mk8cfd665b1e9f0456da5180f863aaa31f2440c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 06:11:04.416145  444152 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1205 06:11:04.416323  444152 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-199160 host does not exist
	  To start a cluster, run: "minikube start -p download-only-199160"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
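LogsDuration passes despite the non-zero exit because "minikube logs" against a download-only profile is expected to fail with exit status 85 (the host was never created, per the stdout above). A minimal sketch of asserting a specific exit code with os/exec (binary path and profile taken from the log; the test name is hypothetical):

package integration

import (
	"errors"
	"os/exec"
	"testing"
)

func TestExpectedExitCodeSketch(t *testing.T) {
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-199160")
	err := cmd.Run()
	// A non-zero exit surfaces as *exec.ExitError; anything else is a real failure.
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) || exitErr.ExitCode() != 85 {
		t.Fatalf("want exit status 85, got %v", err)
	}
}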

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-199160
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (4.67s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-820804 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-820804 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.674337056s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.67s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1205 06:11:10.805513  444147 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1205 06:11:10.805547  444147 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.42s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-820804
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-820804: exit status 85 (420.950936ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-199160 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete  │ -p download-only-199160                                                                                                                                                   │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start   │ -o=json --download-only -p download-only-820804 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-820804 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:11:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:11:06.175965  444357 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:11:06.176163  444357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:06.176191  444357 out.go:374] Setting ErrFile to fd 2...
	I1205 06:11:06.176212  444357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:06.176485  444357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:11:06.176935  444357 out.go:368] Setting JSON to true
	I1205 06:11:06.177790  444357 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10394,"bootTime":1764904673,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:11:06.177886  444357 start.go:143] virtualization:  
	I1205 06:11:06.181333  444357 out.go:99] [download-only-820804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:11:06.181549  444357 notify.go:221] Checking for updates...
	I1205 06:11:06.184385  444357 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:11:06.187521  444357 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:11:06.190535  444357 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:06.193445  444357 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:11:06.196606  444357 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 06:11:06.202506  444357 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:11:06.202784  444357 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:11:06.236297  444357 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:11:06.236426  444357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:06.293350  444357 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-05 06:11:06.284175957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:06.293459  444357 docker.go:319] overlay module found
	I1205 06:11:06.296686  444357 out.go:99] Using the docker driver based on user configuration
	I1205 06:11:06.296728  444357 start.go:309] selected driver: docker
	I1205 06:11:06.296735  444357 start.go:927] validating driver "docker" against <nil>
	I1205 06:11:06.296840  444357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:06.352602  444357 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-05 06:11:06.343188903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:06.352749  444357 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:11:06.353034  444357 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1205 06:11:06.353189  444357 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:11:06.356511  444357 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-820804 host does not exist
	  To start a cluster, run: "minikube start -p download-only-820804"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.42s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-820804
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.57s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-120801 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-120801 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.56527811s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.57s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-120801
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-120801: exit status 85 (89.209021ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-199160 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete  │ -p download-only-199160                                                                                                                                                          │ download-only-199160 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start   │ -o=json --download-only -p download-only-820804 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-820804 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ delete  │ -p download-only-820804                                                                                                                                                          │ download-only-820804 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │ 05 Dec 25 06:11 UTC │
	│ start   │ -o=json --download-only -p download-only-120801 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-120801 │ jenkins │ v1.37.0 │ 05 Dec 25 06:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/05 06:11:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 06:11:11.618857  444559 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:11:11.619019  444559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:11.619030  444559 out.go:374] Setting ErrFile to fd 2...
	I1205 06:11:11.619035  444559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:11:11.619278  444559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:11:11.619669  444559 out.go:368] Setting JSON to true
	I1205 06:11:11.620443  444559 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10399,"bootTime":1764904673,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:11:11.620509  444559 start.go:143] virtualization:  
	I1205 06:11:11.624037  444559 out.go:99] [download-only-120801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:11:11.624315  444559 notify.go:221] Checking for updates...
	I1205 06:11:11.627422  444559 out.go:171] MINIKUBE_LOCATION=21997
	I1205 06:11:11.630974  444559 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:11:11.634204  444559 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:11:11.637291  444559 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:11:11.640449  444559 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 06:11:11.646440  444559 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 06:11:11.646767  444559 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:11:11.672733  444559 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:11:11.672846  444559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:11.741196  444559 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:11:11.731085024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:11.741308  444559 docker.go:319] overlay module found
	I1205 06:11:11.744439  444559 out.go:99] Using the docker driver based on user configuration
	I1205 06:11:11.744488  444559 start.go:309] selected driver: docker
	I1205 06:11:11.744496  444559 start.go:927] validating driver "docker" against <nil>
	I1205 06:11:11.744604  444559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:11:11.801585  444559 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-05 06:11:11.791450738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:11:11.801746  444559 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1205 06:11:11.802036  444559 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1205 06:11:11.802188  444559 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 06:11:11.805432  444559 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-120801 host does not exist
	  To start a cluster, run: "minikube start -p download-only-120801"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-120801
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1205 06:11:15.626999  444147 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-589348 --alsologtostderr --binary-mirror http://127.0.0.1:36515 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-589348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-589348
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-640282
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-640282: exit status 85 (71.576003ms)
-- stdout --
	* Profile "addons-640282" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-640282"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-640282
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-640282: exit status 85 (76.181372ms)
-- stdout --
	* Profile "addons-640282" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-640282"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (148.12s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-640282 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-640282 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m28.11985837s)
--- PASS: TestAddons/Setup (148.12s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-640282 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-640282 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (10.91s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-640282 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-640282 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4c88078d-a560-4a11-ba24-8bb270c20468] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4c88078d-a560-4a11-ba24-8bb270c20468] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003466295s
addons_test.go:694: (dbg) Run:  kubectl --context addons-640282 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-640282 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-640282 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-640282 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.91s)

TestAddons/StoppedEnableDisable (12.6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-640282
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-640282: (12.148947132s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-640282
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-640282
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-640282
--- PASS: TestAddons/StoppedEnableDisable (12.60s)

TestCertOptions (33.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-350114 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-350114 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.773741969s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-350114 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-350114 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-350114 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-350114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-350114
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-350114: (2.100502799s)
--- PASS: TestCertOptions (33.63s)

TestCertExpiration (264.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-298198 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-298198 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.172993425s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-298198 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-298198 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (42.349422855s)
helpers_test.go:175: Cleaning up "cert-expiration-298198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-298198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-298198: (2.459325577s)
--- PASS: TestCertExpiration (264.98s)

TestForceSystemdFlag (40.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-059215 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-059215 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.12475557s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-059215 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-059215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-059215
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-059215: (2.77814294s)
--- PASS: TestForceSystemdFlag (40.25s)

TestForceSystemdEnv (46.18s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-530100 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-530100 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.304822824s)
helpers_test.go:175: Cleaning up "force-systemd-env-530100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-530100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-530100: (2.87775498s)
--- PASS: TestForceSystemdEnv (46.18s)

TestErrorSpam/setup (29.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-714372 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-714372 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-714372 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-714372 --driver=docker  --container-runtime=crio: (29.78061209s)
--- PASS: TestErrorSpam/setup (29.78s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (6.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause: exit status 80 (2.503231224s)
-- stdout --
	* Pausing node nospam-714372 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:17:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause: exit status 80 (2.102171999s)
-- stdout --
	* Pausing node nospam-714372 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:17:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause: exit status 80 (2.178072157s)
-- stdout --
	* Pausing node nospam-714372 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:17:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.78s)

TestErrorSpam/unpause (5.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause: exit status 80 (2.208162439s)
-- stdout --
	* Unpausing node nospam-714372 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:17:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause: exit status 80 (1.957731707s)
-- stdout --
	* Unpausing node nospam-714372 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:17:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause: exit status 80 (1.611479456s)
-- stdout --
	* Unpausing node nospam-714372 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-05T06:17:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.78s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 stop: (1.309915627s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-714372 --log_dir /tmp/nospam-714372 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252233 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1205 06:18:45.393963  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.400460  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.411931  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.433379  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.474875  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.556297  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:45.717851  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:46.039495  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:46.681124  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:47.963274  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:50.525364  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:55.647399  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:05.889013  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-252233 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.694550924s)
--- PASS: TestFunctional/serial/StartWithProxy (81.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.69s)

=== RUN   TestFunctional/serial/SoftStart
I1205 06:19:18.568211  444147 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252233 --alsologtostderr -v=8
E1205 06:19:26.371374  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-252233 --alsologtostderr -v=8: (27.68600324s)
functional_test.go:678: soft start took 27.686514967s for "functional-252233" cluster.
I1205 06:19:46.254496  444147 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (27.69s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-252233 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 cache add registry.k8s.io/pause:3.1: (1.244533509s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 cache add registry.k8s.io/pause:3.3: (1.251766883s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 cache add registry.k8s.io/pause:latest: (1.130605107s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-252233 /tmp/TestFunctionalserialCacheCmdcacheadd_local2751876625/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cache add minikube-local-cache-test:functional-252233
functional_test.go:1104: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 cache add minikube-local-cache-test:functional-252233: (1.149582288s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cache delete minikube-local-cache-test:functional-252233
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-252233
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.542556ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.80s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 kubectl -- --context functional-252233 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-252233 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (35.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 06:20:07.333259  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-252233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.906151541s)
functional_test.go:776: restart took 35.906256839s for "functional-252233" cluster.
I1205 06:20:30.222543  444147 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (35.91s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-252233 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
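
What ComponentHealth asserts, per control-plane pod, is phase Running plus a Ready condition of True. As a sketch, the same check can be expressed with a kubectl jsonpath query (context name taken from this run; the jsonpath is ours, not the test's):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One line per control-plane pod: name, phase, Ready condition status.
		jsonpath := `{range .items[*]}{.metadata.name} {.status.phase} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
		out, err := exec.Command("kubectl", "--context", "functional-252233",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
			"-o", "jsonpath="+jsonpath).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Print(string(out)) // expect e.g. "kube-apiserver-... Running True"
	}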

TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 logs: (1.479600182s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 logs --file /tmp/TestFunctionalserialLogsFileCmd3458315884/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 logs --file /tmp/TestFunctionalserialLogsFileCmd3458315884/001/logs.txt: (1.489966378s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.08s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-252233 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-252233
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-252233: exit status 115 (389.027409ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31643 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-252233 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)
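
The interesting part here is the exit code: the service table still prints, but because no running pod backs invalid-svc, the command exits 115 (SVC_UNREACHABLE). A hedged sketch that branches on that code, with the exit value taken from this log rather than from any documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-252233")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("service is reachable")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 115:
			// 115 is what this run produced for a service with no running pods.
			fmt.Println("SVC_UNREACHABLE: no running pod behind the service")
		default:
			fmt.Println("service command failed:", err)
		}
	}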

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 config get cpus: exit status 14 (81.07919ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 config get cpus: exit status 14 (68.493643ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
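
The round-trip above (unset, get, set, get, unset, get) relies on `config get` exiting 14 when the key is absent. A small sketch that distinguishes "unset" from other failures by that code, which is taken from this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// configGet returns the value of a minikube config key, a flag saying
	// whether the key was unset (exit status 14 in this log), or an error.
	func configGet(profile, key string) (value string, unset bool, err error) {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile, "config", "get", key)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			return "", true, nil
		}
		return strings.TrimSpace(string(out)), false, err
	}

	func main() {
		val, unset, err := configGet("functional-252233", "cpus")
		switch {
		case err != nil:
			fmt.Println("config get failed:", err)
		case unset:
			fmt.Println("cpus is not set")
		default:
			fmt.Println("cpus =", val)
		}
	}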

TestFunctional/parallel/DashboardCmd (15.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-252233 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-252233 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 470494: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.14s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-252233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.481322ms)
-- stdout --
	* [functional-252233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1205 06:31:04.706636  470159 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:31:04.706807  470159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:04.706829  470159 out.go:374] Setting ErrFile to fd 2...
	I1205 06:31:04.706848  470159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:04.707420  470159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:31:04.707945  470159 out.go:368] Setting JSON to false
	I1205 06:31:04.709100  470159 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11592,"bootTime":1764904673,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:31:04.709217  470159 start.go:143] virtualization:  
	I1205 06:31:04.712276  470159 out.go:179] * [functional-252233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 06:31:04.715934  470159 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:31:04.716017  470159 notify.go:221] Checking for updates...
	I1205 06:31:04.721888  470159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:31:04.724871  470159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:31:04.727732  470159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:31:04.730573  470159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:31:04.733403  470159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:31:04.736745  470159 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:31:04.737339  470159 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:31:04.774076  470159 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:31:04.774201  470159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:31:04.834036  470159 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:31:04.824155809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:31:04.834148  470159 docker.go:319] overlay module found
	I1205 06:31:04.837276  470159 out.go:179] * Using the docker driver based on existing profile
	I1205 06:31:04.840141  470159 start.go:309] selected driver: docker
	I1205 06:31:04.840162  470159 start.go:927] validating driver "docker" against &{Name:functional-252233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:31:04.840300  470159 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:31:04.843891  470159 out.go:203] 
	W1205 06:31:04.846767  470159 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 06:31:04.849715  470159 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252233 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
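
Both dry-run invocations fail fast for the same reason: the requested 250MiB is below the 1800MB floor reported in RSRC_INSUFFICIENT_REQ_MEMORY. A toy re-creation of that guard, with the threshold lifted from the error text and the helper name ours, not minikube's:

	package main

	import "fmt"

	// minUsableMB is the floor quoted by the RSRC_INSUFFICIENT_REQ_MEMORY
	// message in this log; it is not read from minikube source here.
	const minUsableMB = 1800

	// validateMemoryMB is a hypothetical stand-in for minikube's requested-memory check.
	func validateMemoryMB(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateMemoryMB(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}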

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-252233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-252233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (206.651559ms)
-- stdout --
	* [functional-252233] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1205 06:31:04.508420  470112 out.go:360] Setting OutFile to fd 1 ...
	I1205 06:31:04.508641  470112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:04.508667  470112 out.go:374] Setting ErrFile to fd 2...
	I1205 06:31:04.508686  470112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 06:31:04.509128  470112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 06:31:04.509557  470112 out.go:368] Setting JSON to false
	I1205 06:31:04.510576  470112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11592,"bootTime":1764904673,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 06:31:04.510702  470112 start.go:143] virtualization:  
	I1205 06:31:04.514635  470112 out.go:179] * [functional-252233] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1205 06:31:04.517791  470112 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 06:31:04.517895  470112 notify.go:221] Checking for updates...
	I1205 06:31:04.523904  470112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 06:31:04.526870  470112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 06:31:04.529772  470112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 06:31:04.532789  470112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 06:31:04.535611  470112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 06:31:04.539058  470112 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 06:31:04.539744  470112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 06:31:04.570830  470112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 06:31:04.570960  470112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 06:31:04.630207  470112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 06:31:04.62055657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 06:31:04.630319  470112 docker.go:319] overlay module found
	I1205 06:31:04.633523  470112 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 06:31:04.636723  470112 start.go:309] selected driver: docker
	I1205 06:31:04.636749  470112 start.go:927] validating driver "docker" against &{Name:functional-252233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-252233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 06:31:04.636867  470112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 06:31:04.640516  470112 out.go:203] 
	W1205 06:31:04.643674  470112 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 06:31:04.646508  470112 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (25.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5a7152d4-54c8-4e57-b889-d123abc92eed] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003656064s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-252233 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-252233 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-252233 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-252233 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [734568d2-2e13-4721-9f91-4f6ec6354f89] Pending
helpers_test.go:352: "sp-pod" [734568d2-2e13-4721-9f91-4f6ec6354f89] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [734568d2-2e13-4721-9f91-4f6ec6354f89] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00387436s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-252233 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-252233 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-252233 delete -f testdata/storage-provisioner/pod.yaml: (1.011195375s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-252233 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e2abbcd0-9baf-491f-a085-0575593a68d1] Pending
helpers_test.go:352: "sp-pod" [e2abbcd0-9baf-491f-a085-0575593a68d1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00374547s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-252233 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.97s)
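
The persistence claim is checked in the simplest way possible: write a file through one pod, delete the pod, start a fresh pod against the same claim, and list the file. A compact sketch of those kubectl steps (names and paths as they appear above; the readiness wait between delete and the final exec is elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubectl runs a kubectl command against the context used in this run.
	func kubectl(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-252233"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the first pod
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the PVC
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate against the same claim
			// (the real test waits for the new pod to be Running here)
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // the file should survive
		}
		for _, step := range steps {
			out, err := kubectl(step...)
			fmt.Printf("kubectl %v: err=%v\n%s", step, err, out)
		}
	}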

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh -n functional-252233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cp functional-252233:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2956756363/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh -n functional-252233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh -n functional-252233 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/444147/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /etc/test/nested/copy/444147/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/444147.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /etc/ssl/certs/444147.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/444147.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /usr/share/ca-certificates/444147.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4441472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /etc/ssl/certs/4441472.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4441472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /usr/share/ca-certificates/4441472.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

TestFunctional/parallel/NodeLabels (0.15s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-252233 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh "sudo systemctl is-active docker": exit status 1 (277.374608ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh "sudo systemctl is-active containerd": exit status 1 (277.723051ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
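
Both probes rely on systemctl conventions: `is-active` prints the unit state and exits non-zero (surfacing as status 3 through ssh here) when the unit is inactive, so a failing exit is the passing outcome for a disabled runtime. A sketch that reads it the same way, with the binary path and profile taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runtimeInactive asks systemd inside the node whether a unit is active;
	// "inactive" on stdout (with a non-zero exit) means the runtime is off.
	func runtimeInactive(unit string) bool {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-252233",
			"ssh", "sudo systemctl is-active "+unit)
		out, _ := cmd.CombinedOutput()
		return strings.Contains(string(out), "inactive")
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			fmt.Printf("%s inactive: %v\n", unit, runtimeInactive(unit))
		}
	}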

TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-252233 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-252233 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-252233 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 466849: os: process already finished
helpers_test.go:519: unable to terminate pid 466624: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-252233 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-252233 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-252233 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9ede654d-fd91-4b2b-ae70-676b850870af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9ede654d-fd91-4b2b-ae70-676b850870af] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004360158s
I1205 06:20:47.704002  444147 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-252233 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.254.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-252233 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "363.127118ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.322752ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "374.143433ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.140591ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (7.38s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdany-port3624560449/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764916253083119094" to /tmp/TestFunctionalparallelMountCmdany-port3624560449/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764916253083119094" to /tmp/TestFunctionalparallelMountCmdany-port3624560449/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764916253083119094" to /tmp/TestFunctionalparallelMountCmdany-port3624560449/001/test-1764916253083119094
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.76768ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1205 06:30:53.446174  444147 retry.go:31] will retry after 731.600049ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 06:30 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 06:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 06:30 test-1764916253083119094
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh cat /mount-9p/test-1764916253083119094
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-252233 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [26c80250-f63e-422b-a7b1-524e769b96b8] Pending
helpers_test.go:352: "busybox-mount" [26c80250-f63e-422b-a7b1-524e769b96b8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [26c80250-f63e-422b-a7b1-524e769b96b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [26c80250-f63e-422b-a7b1-524e769b96b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003930435s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-252233 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdany-port3624560449/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.38s)
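
The first findmnt probe failing and the retry.go line that follows show the harness's pattern: the 9p mount becomes visible shortly after the mount daemon starts, so the check is polled rather than asserted once. A minimal retry loop in the same spirit (the delays are ours; the harness's retry.go picks its own):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// mountVisible reports whether the 9p mount shows up inside the node.
	func mountVisible() bool {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-252233",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		return cmd.Run() == nil
	}

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			if mountVisible() {
				fmt.Printf("9p mount visible on attempt %d\n", attempt)
				return
			}
			fmt.Printf("attempt %d: not yet mounted, retrying in %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2 // simple doubling backoff
		}
		fmt.Println("mount never became visible")
	}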

TestFunctional/parallel/MountCmd/specific-port (1.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdspecific-port1730393155/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdspecific-port1730393155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh "sudo umount -f /mount-9p": exit status 1 (286.66152ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-252233 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdspecific-port1730393155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.18s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868166128/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868166128/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868166128/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T" /mount1: exit status 1 (520.822257ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 06:31:02.171627  444147 retry.go:31] will retry after 273.504582ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-252233 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868166128/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868166128/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-252233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2868166128/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

TestFunctional/parallel/ServiceCmd/List (1.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 service list: (1.45769619s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 service list -o json: (1.340457475s)
functional_test.go:1504: Took "1.340528113s" to run "out/minikube-linux-arm64 -p functional-252233 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.34s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.8s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252233 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252233 image ls --format short --alsologtostderr:
I1205 06:31:22.488769  472786 out.go:360] Setting OutFile to fd 1 ...
I1205 06:31:22.489122  472786 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:22.489186  472786 out.go:374] Setting ErrFile to fd 2...
I1205 06:31:22.489208  472786 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:22.489819  472786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 06:31:22.490612  472786 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:22.490806  472786 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:22.491467  472786 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
I1205 06:31:22.521232  472786 ssh_runner.go:195] Run: systemctl --version
I1205 06:31:22.521356  472786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
I1205 06:31:22.542615  472786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
I1205 06:31:22.653705  472786 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252233 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252233 image ls --format table --alsologtostderr:
I1205 06:31:23.472517  473076 out.go:360] Setting OutFile to fd 1 ...
I1205 06:31:23.472666  473076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:23.472691  473076 out.go:374] Setting ErrFile to fd 2...
I1205 06:31:23.472709  473076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:23.473003  473076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 06:31:23.473704  473076 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:23.473870  473076 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:23.474470  473076 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
I1205 06:31:23.510026  473076 ssh_runner.go:195] Run: systemctl --version
I1205 06:31:23.510082  473076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
I1205 06:31:23.532822  473076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
I1205 06:31:23.641339  473076 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252233 image ls --format json --alsologtostderr:
[{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTa
gs":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5a
db33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserve
r@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io
/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c
9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252233 image ls --format json --alsologtostderr:
I1205 06:31:23.211987  472996 out.go:360] Setting OutFile to fd 1 ...
I1205 06:31:23.212214  472996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:23.212241  472996 out.go:374] Setting ErrFile to fd 2...
I1205 06:31:23.212262  472996 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:23.212576  472996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 06:31:23.213314  472996 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:23.213511  472996 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:23.214073  472996 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
I1205 06:31:23.239684  472996 ssh_runner.go:195] Run: systemctl --version
I1205 06:31:23.239744  472996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
I1205 06:31:23.263207  472996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
I1205 06:31:23.369995  472996 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252233 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252233 image ls --format yaml --alsologtostderr:
I1205 06:31:22.935288  472923 out.go:360] Setting OutFile to fd 1 ...
I1205 06:31:22.935496  472923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:22.935508  472923 out.go:374] Setting ErrFile to fd 2...
I1205 06:31:22.935514  472923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:22.935861  472923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 06:31:22.936669  472923 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:22.936851  472923 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:22.937560  472923 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
I1205 06:31:22.955953  472923 ssh_runner.go:195] Run: systemctl --version
I1205 06:31:22.956011  472923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
I1205 06:31:22.977194  472923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
I1205 06:31:23.088105  472923 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-252233 ssh pgrep buildkitd: exit status 1 (363.411214ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr: (3.463878931s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> de15661ff50
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-252233
--> 7e7c8ba6514
Successfully tagged localhost/my-image:functional-252233
7e7c8ba65143c0643d93d39cd6aea57b026b6bc3ca1463671ac7ef4fa49dc5b7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-252233 image build -t localhost/my-image:functional-252233 testdata/build --alsologtostderr:
I1205 06:31:23.141921  472982 out.go:360] Setting OutFile to fd 1 ...
I1205 06:31:23.142728  472982 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:23.142744  472982 out.go:374] Setting ErrFile to fd 2...
I1205 06:31:23.142750  472982 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:31:23.143110  472982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 06:31:23.143821  472982 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:23.144526  472982 config.go:182] Loaded profile config "functional-252233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1205 06:31:23.145096  472982 cli_runner.go:164] Run: docker container inspect functional-252233 --format={{.State.Status}}
I1205 06:31:23.170459  472982 ssh_runner.go:195] Run: systemctl --version
I1205 06:31:23.170521  472982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-252233
I1205 06:31:23.199167  472982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-252233/id_rsa Username:docker}
I1205 06:31:23.309459  472982 build_images.go:162] Building image from path: /tmp/build.2394402792.tar
I1205 06:31:23.309552  472982 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 06:31:23.319991  472982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2394402792.tar
I1205 06:31:23.324768  472982 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2394402792.tar: stat -c "%s %y" /var/lib/minikube/build/build.2394402792.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2394402792.tar': No such file or directory
I1205 06:31:23.324798  472982 ssh_runner.go:362] scp /tmp/build.2394402792.tar --> /var/lib/minikube/build/build.2394402792.tar (3072 bytes)
I1205 06:31:23.344466  472982 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2394402792
I1205 06:31:23.353016  472982 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2394402792 -xf /var/lib/minikube/build/build.2394402792.tar
I1205 06:31:23.361735  472982 crio.go:315] Building image: /var/lib/minikube/build/build.2394402792
I1205 06:31:23.361823  472982 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-252233 /var/lib/minikube/build/build.2394402792 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1205 06:31:26.507596  472982 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-252233 /var/lib/minikube/build/build.2394402792 --cgroup-manager=cgroupfs: (3.145744792s)
I1205 06:31:26.507661  472982 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2394402792
I1205 06:31:26.515963  472982 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2394402792.tar
I1205 06:31:26.523962  472982 build_images.go:218] Built localhost/my-image:functional-252233 from /tmp/build.2394402792.tar
I1205 06:31:26.523995  472982 build_images.go:134] succeeded building to: functional-252233
I1205 06:31:26.524000  472982 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

TestFunctional/parallel/ImageCommands/Setup (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-252233
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image rm kicbase/echo-server:functional-252233 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 image ls
2025/12/05 06:31:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-252233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-252233
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-252233
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-252233
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-441321/.minikube/files/etc/test/nested/copy/444147/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 cache add registry.k8s.io/pause:3.1: (1.123355905s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 cache add registry.k8s.io/pause:3.3: (1.091124729s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 cache add registry.k8s.io/pause:latest: (1.0619999s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3542313661/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cache add minikube-local-cache-test:functional-787602
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cache delete minikube-local-cache-test:functional-787602
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-787602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.383522ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1723190262/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 config get cpus: exit status 14 (69.259382ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 config get cpus: exit status 14 (64.562509ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (194.297515ms)

-- stdout --
	* [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1205 07:00:49.502343  502938 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:49.502502  502938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.502513  502938 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:49.502519  502938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.502784  502938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:00:49.503154  502938 out.go:368] Setting JSON to false
	I1205 07:00:49.503986  502938 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13377,"bootTime":1764904673,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:00:49.504060  502938 start.go:143] virtualization:  
	I1205 07:00:49.507496  502938 out.go:179] * [functional-787602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1205 07:00:49.510290  502938 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:00:49.510353  502938 notify.go:221] Checking for updates...
	I1205 07:00:49.516286  502938 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:00:49.519283  502938 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:00:49.522613  502938 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:00:49.525571  502938 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:00:49.528361  502938 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:00:49.531767  502938 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:49.532408  502938 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:00:49.568231  502938 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:00:49.568378  502938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.627969  502938 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.618607395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.628073  502938 docker.go:319] overlay module found
	I1205 07:00:49.631064  502938 out.go:179] * Using the docker driver based on existing profile
	I1205 07:00:49.633887  502938 start.go:309] selected driver: docker
	I1205 07:00:49.633910  502938 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.634045  502938 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:00:49.637715  502938 out.go:203] 
	W1205 07:00:49.640541  502938 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 07:00:49.643447  502938 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (199.49823ms)

-- stdout --
	* [functional-787602] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1205 07:00:49.313596  502886 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:00:49.313823  502886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.313847  502886 out.go:374] Setting ErrFile to fd 2...
	I1205 07:00:49.313868  502886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:00:49.314263  502886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:00:49.314717  502886 out.go:368] Setting JSON to false
	I1205 07:00:49.315582  502886 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13377,"bootTime":1764904673,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1205 07:00:49.315673  502886 start.go:143] virtualization:  
	I1205 07:00:49.319595  502886 out.go:179] * [functional-787602] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1205 07:00:49.323383  502886 out.go:179]   - MINIKUBE_LOCATION=21997
	I1205 07:00:49.323463  502886 notify.go:221] Checking for updates...
	I1205 07:00:49.329199  502886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 07:00:49.332121  502886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	I1205 07:00:49.335037  502886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	I1205 07:00:49.337883  502886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 07:00:49.340680  502886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 07:00:49.344014  502886 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1205 07:00:49.344577  502886 driver.go:422] Setting default libvirt URI to qemu:///system
	I1205 07:00:49.371916  502886 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1205 07:00:49.372060  502886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:00:49.433153  502886 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-05 07:00:49.423516983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:00:49.433259  502886 docker.go:319] overlay module found
	I1205 07:00:49.436337  502886 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1205 07:00:49.439286  502886 start.go:309] selected driver: docker
	I1205 07:00:49.439314  502886 start.go:927] validating driver "docker" against &{Name:functional-787602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-787602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 07:00:49.439421  502886 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 07:00:49.443018  502886 out.go:203] 
	W1205 07:00:49.445928  502886 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 07:00:49.448790  502886 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)
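
DryRun and InternationalLanguage exercise the same deliberate failure: both pass --memory 250MB so that minikube exits with RSRC_INSUFFICIENT_REQ_MEMORY before touching the cluster, and InternationalLanguage additionally asserts that the message is localized, hence the French output above. A rough manual reproduction sketch, assuming the same built arm64 binary and profile (the LC_ALL value is an assumption about how the harness selects the locale):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-787602 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio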

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh -n functional-787602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cp functional-787602:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1400708700/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh -n functional-787602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh -n functional-787602 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/444147/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /etc/test/nested/copy/444147/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/444147.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /etc/ssl/certs/444147.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/444147.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /usr/share/ca-certificates/444147.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4441472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /etc/ssl/certs/4441472.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4441472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /usr/share/ca-certificates/4441472.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.84s)
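
The hashed names checked above are OpenSSL subject-hash links: /etc/ssl/certs/51391683.0 should resolve to the same certificate as 444147.pem, and 3ec20f2e.0 to 4441472.pem. A quick way to confirm the pairing from inside the node, as a sketch (the expected hash value is inferred from the test steps above, not independently verified):

    out/minikube-linux-arm64 -p functional-787602 ssh "openssl x509 -noout -hash -in /usr/share/ca-certificates/444147.pem"   # expect 51391683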

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh "sudo systemctl is-active docker": exit status 1 (274.361283ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh "sudo systemctl is-active containerd": exit status 1 (296.295709ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
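
Both non-zero exits above are the expected result: with crio as the configured runtime, `systemctl is-active docker` and `systemctl is-active containerd` print `inactive` and exit with status 3 inside the guest, which the ssh wrapper surfaces as exit status 1. The complementary check, as a sketch, is that the active runtime reports healthy:

    out/minikube-linux-arm64 -p functional-787602 ssh "sudo systemctl is-active crio"   # expect "active", exit 0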

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-787602 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "372.239097ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.947313ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "356.878803ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.397065ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo535009720/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.243198ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 07:00:43.448557  444147 retry.go:31] will retry after 304.476142ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo535009720/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh "sudo umount -f /mount-9p": exit status 1 (275.369181ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-787602 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo535009720/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-787602 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2531362716/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.34s)
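
VerifyCleanup mounts one host directory at three guest paths and then uses `mount --kill=true`, which terminates every mount daemon belonging to the profile in a single call; the per-mount stop attempts afterwards find no surviving parent process, which is what the "assuming dead" lines record. The equivalent manual sequence, sketched with a placeholder host path:

    out/minikube-linux-arm64 mount -p functional-787602 /tmp/src:/mount1 &
    out/minikube-linux-arm64 -p functional-787602 ssh "findmnt -T /mount1"
    out/minikube-linux-arm64 mount -p functional-787602 --kill=true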

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787602 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787602 image ls --format short --alsologtostderr:
I1205 07:01:03.044620  505452 out.go:360] Setting OutFile to fd 1 ...
I1205 07:01:03.044807  505452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:03.044836  505452 out.go:374] Setting ErrFile to fd 2...
I1205 07:01:03.044857  505452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:03.045127  505452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:01:03.045790  505452 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:03.045983  505452 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:03.046593  505452 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:01:03.068720  505452 ssh_runner.go:195] Run: systemctl --version
I1205 07:01:03.068789  505452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:01:03.085639  505452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
I1205 07:01:03.188560  505452 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787602 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest            │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 66749159455b3 │ 29MB   │
│ localhost/my-image                      │ functional-787602 │ 1cb5ae707718e │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ ccd634d9bcc36 │ 84.9MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.1               │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1            │ d7b100cd9a77b │ 517kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest            │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787602 image ls --format table --alsologtostderr:
I1205 07:01:07.440355  505935 out.go:360] Setting OutFile to fd 1 ...
I1205 07:01:07.440476  505935 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:07.440487  505935 out.go:374] Setting ErrFile to fd 2...
I1205 07:01:07.440491  505935 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:07.440758  505935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:01:07.441435  505935 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:07.441563  505935 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:07.442128  505935 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:01:07.459495  505935 ssh_runner.go:195] Run: systemctl --version
I1205 07:01:07.459554  505935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:01:07.479036  505935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
I1205 07:01:07.581067  505935 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787602 image ls --format json --alsologtostderr:
[{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74488375"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60854229"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84947242"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"],"repoTags":["registry.
k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72167568"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74105124"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["regi
stry.k8s.io/pause:latest"],"size":"246070"},{"id":"091774799ac68d856ad95d14ee0b711d7df9b79ceedcb14dc6da365ff707a01f","repoDigests":["docker.io/library/363da83a9781cbeadb95701cfd85c4633b0bbd52094c63ef6791a3f897169d4b-tmp@sha256:a3c9dd0c4886f05bdbbe3edc87768dc5a7efdf87a71875e6d4374e65eb58175f"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"1cb5ae707718eccae607eb89c0870ac2c26e48c745af9dfc38
252a84f04de563","repoDigests":["localhost/my-image@sha256:ecf25bb5c0ba48e660353aae439d42a05d369e4b8e3bada15efdc0438398cb8a"],"repoTags":["localhost/my-image:functional-787602"],"size":"1640789"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49819792"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787602 image ls --format json --alsologtostderr:
I1205 07:01:07.202030  505895 out.go:360] Setting OutFile to fd 1 ...
I1205 07:01:07.202248  505895 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:07.202277  505895 out.go:374] Setting ErrFile to fd 2...
I1205 07:01:07.202297  505895 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:07.202665  505895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:01:07.205002  505895 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:07.205228  505895 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:07.205839  505895 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:01:07.225860  505895 ssh_runner.go:195] Run: systemctl --version
I1205 07:01:07.225909  505895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:01:07.249432  505895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
I1205 07:01:07.352795  505895 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787602 image ls --format yaml --alsologtostderr:
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60854229"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72167568"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49819792"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74488375"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84947242"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74105124"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787602 image ls --format yaml --alsologtostderr:
I1205 07:01:03.271303  505488 out.go:360] Setting OutFile to fd 1 ...
I1205 07:01:03.271440  505488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:03.271452  505488 out.go:374] Setting ErrFile to fd 2...
I1205 07:01:03.271481  505488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:03.271759  505488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:01:03.272446  505488 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:03.272610  505488 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:03.273165  505488 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:01:03.290828  505488 ssh_runner.go:195] Run: systemctl --version
I1205 07:01:03.290878  505488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:01:03.308500  505488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
I1205 07:01:03.409201  505488 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787602 ssh pgrep buildkitd: exit status 1 (306.007275ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image build -t localhost/my-image:functional-787602 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-787602 image build -t localhost/my-image:functional-787602 testdata/build --alsologtostderr: (3.163260936s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787602 image build -t localhost/my-image:functional-787602 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 091774799ac
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-787602
--> 1cb5ae70771
Successfully tagged localhost/my-image:functional-787602
1cb5ae707718eccae607eb89c0870ac2c26e48c745af9dfc38252a84f04de563
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787602 image build -t localhost/my-image:functional-787602 testdata/build --alsologtostderr:
I1205 07:01:03.795715  505590 out.go:360] Setting OutFile to fd 1 ...
I1205 07:01:03.795893  505590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:03.795923  505590 out.go:374] Setting ErrFile to fd 2...
I1205 07:01:03.795943  505590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 07:01:03.796197  505590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
I1205 07:01:03.796896  505590 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:03.797610  505590 config.go:182] Loaded profile config "functional-787602": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1205 07:01:03.798218  505590 cli_runner.go:164] Run: docker container inspect functional-787602 --format={{.State.Status}}
I1205 07:01:03.815566  505590 ssh_runner.go:195] Run: systemctl --version
I1205 07:01:03.815623  505590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787602
I1205 07:01:03.832661  505590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/functional-787602/id_rsa Username:docker}
I1205 07:01:03.937159  505590 build_images.go:162] Building image from path: /tmp/build.2030690996.tar
I1205 07:01:03.937236  505590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 07:01:03.944948  505590 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2030690996.tar
I1205 07:01:03.948471  505590 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2030690996.tar: stat -c "%s %y" /var/lib/minikube/build/build.2030690996.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2030690996.tar': No such file or directory
I1205 07:01:03.948500  505590 ssh_runner.go:362] scp /tmp/build.2030690996.tar --> /var/lib/minikube/build/build.2030690996.tar (3072 bytes)
I1205 07:01:03.965757  505590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2030690996
I1205 07:01:03.973721  505590 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2030690996 -xf /var/lib/minikube/build/build.2030690996.tar
I1205 07:01:03.981440  505590 crio.go:315] Building image: /var/lib/minikube/build/build.2030690996
I1205 07:01:03.981519  505590 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-787602 /var/lib/minikube/build/build.2030690996 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1205 07:01:06.883843  505590 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-787602 /var/lib/minikube/build/build.2030690996 --cgroup-manager=cgroupfs: (2.902295672s)
I1205 07:01:06.883920  505590 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2030690996
I1205 07:01:06.892123  505590 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2030690996.tar
I1205 07:01:06.900174  505590 build_images.go:218] Built localhost/my-image:functional-787602 from /tmp/build.2030690996.tar
I1205 07:01:06.900207  505590 build_images.go:134] succeeded building to: functional-787602
I1205 07:01:06.900212  505590 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.71s)
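The stderr above spells out minikube's image-build round trip on a CRI-O node: the build context is tarred locally, copied in over SSH, unpacked, built with podman, and the staging files removed. Condensed as shell (a sketch of the same sequence run inside the node; NNNN stands for the per-invocation suffix, 2030690996 in this run):

  # Stage and build the context the way build_images.go does (sketch):
  sudo mkdir -p /var/lib/minikube/build
  # ...the tarred context is scp'd to /var/lib/minikube/build/build.NNNN.tar here...
  sudo mkdir -p /var/lib/minikube/build/build.NNNN
  sudo tar -C /var/lib/minikube/build/build.NNNN -xf /var/lib/minikube/build/build.NNNN.tar
  sudo podman build -t localhost/my-image:functional-787602 \
      /var/lib/minikube/build/build.NNNN --cgroup-manager=cgroupfs
  sudo rm -rf /var/lib/minikube/build/build.NNNN
  sudo rm -f /var/lib/minikube/build/build.NNNN.tar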

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-787602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image rm kicbase/echo-server:functional-787602 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)
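update-context rewrites the cluster endpoint for the profile's context in the active kubeconfig. A quick way to inspect what it manages (plain kubectl, not part of the test; assumes the profile's context is the current one):

  kubectl config current-context                   # the profile name, functional-787602
  kubectl config view --minify \
    -o jsonpath='{.clusters[0].cluster.server}'    # the endpoint update-context keeps in sync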

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787602 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-787602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-787602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-787602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (215.23s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1205 07:03:44.278560  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.290511  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.301840  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.323193  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.364533  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.445903  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.607445  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:44.929052  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:45.391638  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:45.571263  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:46.852569  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:49.414909  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:03:54.536209  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:04:04.777867  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:04:25.259422  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:05:06.220827  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:05:39.247946  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m34.324872177s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (215.23s)
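With --ha --wait true the profile boots three control-plane nodes behind a shared API endpoint (192.168.49.254:8443 in the status traces further down). One hedged way to confirm the topology after such a start, using the standard kubeadm node-role label:

  kubectl --context ha-409775 get nodes \
    -l node-role.kubernetes.io/control-plane -o name   # expect three entries after "start --ha"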

TestMultiControlPlane/serial/DeployApp (6.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 kubectl -- rollout status deployment/busybox: (3.406150734s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-dzxxn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-sczqs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-tdrhz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-dzxxn -- nslookup kubernetes.default
E1205 07:06:28.142988  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-sczqs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-tdrhz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-dzxxn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-sczqs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-tdrhz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.06s)
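The manifest itself is not echoed into the log, but the rollout of deployment/busybox into three busybox-7b57f96db7-* pods and the per-pod lookups above reduce to a loop like this (a sketch; pod names vary per run):

  # Repeat the three DNS assertions in every pod of the deployment:
  for pod in $(kubectl --context ha-409775 get pods -o jsonpath='{.items[*].metadata.name}'); do
    for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
      kubectl --context ha-409775 exec "$pod" -- nslookup "$name"
    done
  done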

TestMultiControlPlane/serial/PingHostFromPods (1.54s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-dzxxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-dzxxn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-sczqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-sczqs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-tdrhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 kubectl -- exec busybox-7b57f96db7-tdrhz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)
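The pipeline here merits a note: awk 'NR==5' keeps the fifth line of nslookup's output, where the resolved address sits, and cut -d' ' -f3 extracts the address field; the follow-up ping then probes 192.168.49.1, the host side of the docker network. Annotated:

  # $pod is one of the busybox pod names listed above.
  kubectl --context ha-409775 exec "$pod" -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # resolve the host IP
  kubectl --context ha-409775 exec "$pod" -- sh -c "ping -c 1 192.168.49.1"   # one probe to the host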

TestMultiControlPlane/serial/AddWorkerNode (59.87s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 node add --alsologtostderr -v 5: (58.766450509s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5: (1.099837821s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.87s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-409775 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.089543995s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

TestMultiControlPlane/serial/CopyFile (20.21s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 status --output json --alsologtostderr -v 5: (1.017959151s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp testdata/cp-test.txt ha-409775:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3231933100/001/cp-test_ha-409775.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775:/home/docker/cp-test.txt ha-409775-m02:/home/docker/cp-test_ha-409775_ha-409775-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test_ha-409775_ha-409775-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775:/home/docker/cp-test.txt ha-409775-m03:/home/docker/cp-test_ha-409775_ha-409775-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test_ha-409775_ha-409775-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775:/home/docker/cp-test.txt ha-409775-m04:/home/docker/cp-test_ha-409775_ha-409775-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test_ha-409775_ha-409775-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp testdata/cp-test.txt ha-409775-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3231933100/001/cp-test_ha-409775-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m02:/home/docker/cp-test.txt ha-409775:/home/docker/cp-test_ha-409775-m02_ha-409775.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test_ha-409775-m02_ha-409775.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m02:/home/docker/cp-test.txt ha-409775-m03:/home/docker/cp-test_ha-409775-m02_ha-409775-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test_ha-409775-m02_ha-409775-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m02:/home/docker/cp-test.txt ha-409775-m04:/home/docker/cp-test_ha-409775-m02_ha-409775-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test_ha-409775-m02_ha-409775-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp testdata/cp-test.txt ha-409775-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3231933100/001/cp-test_ha-409775-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m03:/home/docker/cp-test.txt ha-409775:/home/docker/cp-test_ha-409775-m03_ha-409775.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test_ha-409775-m03_ha-409775.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m03:/home/docker/cp-test.txt ha-409775-m02:/home/docker/cp-test_ha-409775-m03_ha-409775-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test_ha-409775-m03_ha-409775-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m03:/home/docker/cp-test.txt ha-409775-m04:/home/docker/cp-test_ha-409775-m03_ha-409775-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test_ha-409775-m03_ha-409775-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp testdata/cp-test.txt ha-409775-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3231933100/001/cp-test_ha-409775-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m04:/home/docker/cp-test.txt ha-409775:/home/docker/cp-test_ha-409775-m04_ha-409775.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775 "sudo cat /home/docker/cp-test_ha-409775-m04_ha-409775.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m04:/home/docker/cp-test.txt ha-409775-m02:/home/docker/cp-test_ha-409775-m04_ha-409775-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m02 "sudo cat /home/docker/cp-test_ha-409775-m04_ha-409775-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 cp ha-409775-m04:/home/docker/cp-test.txt ha-409775-m03:/home/docker/cp-test_ha-409775-m04_ha-409775-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 ssh -n ha-409775-m03 "sudo cat /home/docker/cp-test_ha-409775-m04_ha-409775-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.21s)
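The wall of cp/ssh pairs above is a full copy matrix: testdata/cp-test.txt lands on each of the four nodes, is copied back to the host, then fanned out to every other node, with a sudo cat verifying each hop. The same pattern as a loop (a sketch using the node names from this run):

  nodes="ha-409775 ha-409775-m02 ha-409775-m03 ha-409775-m04"
  for src in $nodes; do
    out/minikube-linux-arm64 -p ha-409775 cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
    for dst in $nodes; do
      [ "$src" = "$dst" ] && continue
      out/minikube-linux-arm64 -p ha-409775 cp "$src:/home/docker/cp-test.txt" \
          "$dst:/home/docker/cp-test_${src}_${dst}.txt"
      out/minikube-linux-arm64 -p ha-409775 ssh -n "$dst" \
          "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
    done
  done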

TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 node stop m02 --alsologtostderr -v 5: (12.087494275s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5: exit status 7 (821.093744ms)

-- stdout --
	ha-409775
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409775-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409775-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-409775-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1205 07:08:04.385230  521691 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:08:04.385353  521691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:08:04.385358  521691 out.go:374] Setting ErrFile to fd 2...
	I1205 07:08:04.385364  521691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:08:04.385634  521691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:08:04.385823  521691 out.go:368] Setting JSON to false
	I1205 07:08:04.385926  521691 notify.go:221] Checking for updates...
	I1205 07:08:04.385863  521691 mustload.go:66] Loading cluster: ha-409775
	I1205 07:08:04.387547  521691 config.go:182] Loaded profile config "ha-409775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:08:04.387574  521691 status.go:174] checking status of ha-409775 ...
	I1205 07:08:04.388202  521691 cli_runner.go:164] Run: docker container inspect ha-409775 --format={{.State.Status}}
	I1205 07:08:04.413005  521691 status.go:371] ha-409775 host status = "Running" (err=<nil>)
	I1205 07:08:04.413028  521691 host.go:66] Checking if "ha-409775" exists ...
	I1205 07:08:04.413336  521691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409775
	I1205 07:08:04.446160  521691 host.go:66] Checking if "ha-409775" exists ...
	I1205 07:08:04.446527  521691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:08:04.446573  521691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409775
	I1205 07:08:04.473187  521691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/ha-409775/id_rsa Username:docker}
	I1205 07:08:04.583951  521691 ssh_runner.go:195] Run: systemctl --version
	I1205 07:08:04.590714  521691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:08:04.605176  521691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:08:04.664488  521691 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-05 07:08:04.654804134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:08:04.665068  521691 kubeconfig.go:125] found "ha-409775" server: "https://192.168.49.254:8443"
	I1205 07:08:04.665106  521691 api_server.go:166] Checking apiserver status ...
	I1205 07:08:04.665155  521691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:08:04.679606  521691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1284/cgroup
	I1205 07:08:04.692180  521691 api_server.go:182] apiserver freezer: "8:freezer:/docker/84b3d2c200558cad339baa8918bfd4cb071f881066db36607199e52b1a4917d9/crio/crio-c40783dace342dc0ac94ff72760973b7d7d60961a94e5ed9302558f51a61847e"
	I1205 07:08:04.692252  521691 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/84b3d2c200558cad339baa8918bfd4cb071f881066db36607199e52b1a4917d9/crio/crio-c40783dace342dc0ac94ff72760973b7d7d60961a94e5ed9302558f51a61847e/freezer.state
	I1205 07:08:04.700752  521691 api_server.go:204] freezer state: "THAWED"
	I1205 07:08:04.700783  521691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 07:08:04.709046  521691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 07:08:04.709077  521691 status.go:463] ha-409775 apiserver status = Running (err=<nil>)
	I1205 07:08:04.709088  521691 status.go:176] ha-409775 status: &{Name:ha-409775 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:08:04.709106  521691 status.go:174] checking status of ha-409775-m02 ...
	I1205 07:08:04.709431  521691 cli_runner.go:164] Run: docker container inspect ha-409775-m02 --format={{.State.Status}}
	I1205 07:08:04.738923  521691 status.go:371] ha-409775-m02 host status = "Stopped" (err=<nil>)
	I1205 07:08:04.738948  521691 status.go:384] host is not running, skipping remaining checks
	I1205 07:08:04.738956  521691 status.go:176] ha-409775-m02 status: &{Name:ha-409775-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:08:04.738977  521691 status.go:174] checking status of ha-409775-m03 ...
	I1205 07:08:04.739318  521691 cli_runner.go:164] Run: docker container inspect ha-409775-m03 --format={{.State.Status}}
	I1205 07:08:04.763265  521691 status.go:371] ha-409775-m03 host status = "Running" (err=<nil>)
	I1205 07:08:04.763295  521691 host.go:66] Checking if "ha-409775-m03" exists ...
	I1205 07:08:04.763607  521691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409775-m03
	I1205 07:08:04.780976  521691 host.go:66] Checking if "ha-409775-m03" exists ...
	I1205 07:08:04.781303  521691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:08:04.781360  521691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409775-m03
	I1205 07:08:04.799114  521691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/ha-409775-m03/id_rsa Username:docker}
	I1205 07:08:04.907866  521691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:08:04.929094  521691 kubeconfig.go:125] found "ha-409775" server: "https://192.168.49.254:8443"
	I1205 07:08:04.929123  521691 api_server.go:166] Checking apiserver status ...
	I1205 07:08:04.929166  521691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:08:04.941233  521691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1213/cgroup
	I1205 07:08:04.949698  521691 api_server.go:182] apiserver freezer: "8:freezer:/docker/f0dafdebf969793aa2f1e05ed125ded374260d8d5178e6b89a48e5e633c6adb8/crio/crio-658204a5e69d8b568fa3b5beb21ea918a2e53fbc84f59aa43d4ebc66be424b5f"
	I1205 07:08:04.949772  521691 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f0dafdebf969793aa2f1e05ed125ded374260d8d5178e6b89a48e5e633c6adb8/crio/crio-658204a5e69d8b568fa3b5beb21ea918a2e53fbc84f59aa43d4ebc66be424b5f/freezer.state
	I1205 07:08:04.958266  521691 api_server.go:204] freezer state: "THAWED"
	I1205 07:08:04.958295  521691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 07:08:04.966391  521691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 07:08:04.966427  521691 status.go:463] ha-409775-m03 apiserver status = Running (err=<nil>)
	I1205 07:08:04.966442  521691 status.go:176] ha-409775-m03 status: &{Name:ha-409775-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:08:04.966463  521691 status.go:174] checking status of ha-409775-m04 ...
	I1205 07:08:04.966781  521691 cli_runner.go:164] Run: docker container inspect ha-409775-m04 --format={{.State.Status}}
	I1205 07:08:04.984048  521691 status.go:371] ha-409775-m04 host status = "Running" (err=<nil>)
	I1205 07:08:04.984093  521691 host.go:66] Checking if "ha-409775-m04" exists ...
	I1205 07:08:04.984408  521691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-409775-m04
	I1205 07:08:05.001925  521691 host.go:66] Checking if "ha-409775-m04" exists ...
	I1205 07:08:05.002267  521691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:08:05.002318  521691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-409775-m04
	I1205 07:08:05.028343  521691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/ha-409775-m04/id_rsa Username:docker}
	I1205 07:08:05.131646  521691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:08:05.145254  521691 status.go:176] ha-409775-m04 status: &{Name:ha-409775-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
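The stderr above also documents how status probes each control plane on CRI-O: pgrep locates the kube-apiserver process, its freezer cgroup must read THAWED (the container isn't paused), and only then is /healthz queried through the shared endpoint. Roughly, per node (a sketch; the cgroup path is run-specific):

  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup        # find the freezer cgroup
  sudo cat /sys/fs/cgroup/freezer/<path>/freezer.state   # <path> from the line above; expect THAWED
  curl -sk https://192.168.49.254:8443/healthz           # expect 200 / "ok"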

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.3s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node start m02 --alsologtostderr -v 5
E1205 07:08:28.465940  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 node start m02 --alsologtostderr -v 5: (28.802086847s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5: (1.370469462s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.163628556s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.12s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 stop --alsologtostderr -v 5
E1205 07:08:42.318560  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:08:44.276901  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:08:45.392323  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:09:11.984339  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 stop --alsologtostderr -v 5: (37.529451405s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 start --wait true --alsologtostderr -v 5
E1205 07:10:39.247050  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 start --wait true --alsologtostderr -v 5: (1m27.407279571s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.12s)
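The assertion is that the node list survives a full stop/start cycle unchanged; by hand that is just a diff of the two listings (sketch):

  out/minikube-linux-arm64 -p ha-409775 node list > /tmp/nodes.before
  out/minikube-linux-arm64 -p ha-409775 stop
  out/minikube-linux-arm64 -p ha-409775 start --wait true
  out/minikube-linux-arm64 -p ha-409775 node list > /tmp/nodes.after
  diff /tmp/nodes.before /tmp/nodes.after    # no output means the node set was kept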

TestMultiControlPlane/serial/DeleteSecondaryNode (11.75s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 node delete m03 --alsologtostderr -v 5: (10.779020839s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.75s)
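The go-template in the last check walks every node's conditions and prints the status of the one whose type is Ready, so a healthy cluster after the delete prints one True line per remaining node:

  # One line per node: the status of its Ready condition.
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'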

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

TestMultiControlPlane/serial/StopCluster (36.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 stop --alsologtostderr -v 5: (35.953624038s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5: exit status 7 (121.414735ms)

-- stdout --
	ha-409775
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409775-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-409775-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 07:11:31.202699  533591 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:11:31.202838  533591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:31.202849  533591 out.go:374] Setting ErrFile to fd 2...
	I1205 07:11:31.202854  533591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:11:31.203109  533591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:11:31.203294  533591 out.go:368] Setting JSON to false
	I1205 07:11:31.203338  533591 mustload.go:66] Loading cluster: ha-409775
	I1205 07:11:31.203411  533591 notify.go:221] Checking for updates...
	I1205 07:11:31.204399  533591 config.go:182] Loaded profile config "ha-409775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:11:31.204425  533591 status.go:174] checking status of ha-409775 ...
	I1205 07:11:31.204923  533591 cli_runner.go:164] Run: docker container inspect ha-409775 --format={{.State.Status}}
	I1205 07:11:31.222947  533591 status.go:371] ha-409775 host status = "Stopped" (err=<nil>)
	I1205 07:11:31.222972  533591 status.go:384] host is not running, skipping remaining checks
	I1205 07:11:31.222979  533591 status.go:176] ha-409775 status: &{Name:ha-409775 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:11:31.223012  533591 status.go:174] checking status of ha-409775-m02 ...
	I1205 07:11:31.223324  533591 cli_runner.go:164] Run: docker container inspect ha-409775-m02 --format={{.State.Status}}
	I1205 07:11:31.246531  533591 status.go:371] ha-409775-m02 host status = "Stopped" (err=<nil>)
	I1205 07:11:31.246558  533591 status.go:384] host is not running, skipping remaining checks
	I1205 07:11:31.246572  533591 status.go:176] ha-409775-m02 status: &{Name:ha-409775-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:11:31.246596  533591 status.go:174] checking status of ha-409775-m04 ...
	I1205 07:11:31.246883  533591 cli_runner.go:164] Run: docker container inspect ha-409775-m04 --format={{.State.Status}}
	I1205 07:11:31.268560  533591 status.go:371] ha-409775-m04 host status = "Stopped" (err=<nil>)
	I1205 07:11:31.268583  533591 status.go:384] host is not running, skipping remaining checks
	I1205 07:11:31.268591  533591 status.go:176] ha-409775-m04 status: &{Name:ha-409775-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)
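Note the exit-code contract the assertion leans on: status exits 0 only when everything is up, and exits 7 when hosts are stopped, as both non-zero exits in this report show. By hand:

  out/minikube-linux-arm64 -p ha-409775 status
  echo $?    # 0 when all hosts run; 7 observed above after "stop"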

TestMultiControlPlane/serial/RestartCluster (84.85s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m23.837395215s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.85s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (80.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 node add --control-plane --alsologtostderr -v 5
E1205 07:13:44.276434  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:13:45.392331  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 node add --control-plane --alsologtostderr -v 5: (1m19.162948728s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-409775 status --alsologtostderr -v 5: (1.037396979s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.07523279s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (77.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-889067 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1205 07:15:39.247033  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-889067 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.472758722s)
--- PASS: TestJSONOutput/start/Command (77.48s)
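With --output=json the start progress arrives as one JSON (CloudEvents-style) event per stdout line; the Distinct/IncreasingCurrentSteps subtests below assert over the data.currentstep field of those events. A sketch of eyeballing the same stream with jq (the event type and field names follow minikube's JSON event schema; verify against your version):

  out/minikube-linux-arm64 start -p json-output-889067 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=docker --container-runtime=crio \
    | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'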

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
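
The two step checks above assert that the `io.k8s.sigs.minikube.step` events emitted during start carry distinct, increasing `currentstep` values. A minimal sketch of pulling that sequence out of the event stream by hand, assuming `jq` is available (profile and flags reused from the run above; this is not the test's own assertion code):

    out/minikube-linux-arm64 start -p json-output-889067 --output=json --user=testUser \
        --memory=3072 --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'

The output should be one step number per line, with no repeats and no decreases.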

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-889067 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-889067 --output=json --user=testUser: (5.808574356s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-140868 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-140868 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (103.237198ms)

-- stdout --
	{"specversion":"1.0","id":"12fef562-dfce-4ea3-a384-abc5878deb8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-140868] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"97d013f4-f400-49a9-91f0-09bf8b475f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"31474312-b0ff-48f1-90d5-ec561095cf6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e675b7e0-6baf-4b0a-9332-38612e331548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig"}}
	{"specversion":"1.0","id":"9788ecfd-2d60-4a8b-bb21-77e4cf6df497","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube"}}
	{"specversion":"1.0","id":"94c8be38-5453-49c2-b67b-1bf1e29de1f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3e2f7ae5-0549-4203-ae25-e27e89a19f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7ed99313-6c98-4854-9def-f2b89c5cb323","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-140868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-140868
--- PASS: TestErrorJSONOutput (0.26s)
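
On the failure path, the event stream above ends with a single `io.k8s.sigs.minikube.error` event carrying the exit code and message. A hedged sketch for surfacing just that event, assuming `jq` (not part of the test itself):

    out/minikube-linux-arm64 start -p json-output-error-140868 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
    # expected, per the run above:
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64 (exit 56)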

TestKicCustomNetwork/create_custom_network (36.47s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-453086 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-453086 --network=: (34.289959938s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-453086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-453086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-453086: (2.155589973s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.47s)

TestKicCustomNetwork/use_default_bridge_network (34.14s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-580913 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-580913 --network=bridge: (31.990476731s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-580913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-580913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-580913: (2.12361361s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.14s)

TestKicExistingNetwork (35.59s)

=== RUN   TestKicExistingNetwork
I1205 07:17:08.762453  444147 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 07:17:08.778551  444147 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 07:17:08.779414  444147 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1205 07:17:08.779449  444147 cli_runner.go:164] Run: docker network inspect existing-network
W1205 07:17:08.795375  444147 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1205 07:17:08.795405  444147 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1205 07:17:08.795422  444147 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1205 07:17:08.795536  444147 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 07:17:08.812484  444147 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e1bc6b978299 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:be:9b:c4:3d:55} reservation:<nil>}
I1205 07:17:08.812846  444147 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002585050}
I1205 07:17:08.812876  444147 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1205 07:17:08.812930  444147 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1205 07:17:08.881224  444147 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-256457 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-256457 --network=existing-network: (33.348667441s)
helpers_test.go:175: Cleaning up "existing-network-256457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-256457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-256457: (2.082852049s)
I1205 07:17:44.329401  444147 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.59s)
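
A condensed replay of what this test does, taken from the commands in the log above: pre-create a bridge network carrying minikube's ownership labels, start a profile against it with `--network=`, and after `delete` run the same `docker network ls` check the test uses to confirm the externally created network was left alone (subnet and names are the ones this run chose):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-arm64 start -p existing-network-256457 --network=existing-network
    out/minikube-linux-arm64 delete -p existing-network-256457
    docker network ls --filter=label=existing-network --format '{{.Name}}'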

TestKicCustomSubnet (37s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-017977 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-017977 --subnet=192.168.60.0/24: (34.79969909s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-017977 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-017977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-017977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-017977: (2.170588068s)
--- PASS: TestKicCustomSubnet (37.00s)

TestKicStaticIP (37.44s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-358171 --static-ip=192.168.200.200
E1205 07:18:44.278516  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:18:45.392317  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-358171 --static-ip=192.168.200.200: (35.013203158s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-358171 ip
helpers_test.go:175: Cleaning up "static-ip-358171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-358171
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-358171: (2.271365322s)
--- PASS: TestKicStaticIP (37.44s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-875808 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-875808 --driver=docker  --container-runtime=crio: (33.412455708s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-878130 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-878130 --driver=docker  --container-runtime=crio: (32.548563526s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-875808
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-878130
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-878130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-878130
E1205 07:20:07.346193  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-878130: (2.13421189s)
helpers_test.go:175: Cleaning up "first-875808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-875808
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-875808: (2.042085592s)
--- PASS: TestMinikubeProfile (71.60s)
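
`profile list -ojson` is the machine-readable view the test re-reads after each `profile <name>` switch. A small sketch for inspecting it by hand, assuming `jq`; the `.valid[].Name` path is an assumption about the JSON shape, not something this log confirms:

    out/minikube-linux-arm64 profile first-875808          # make first-875808 the active profile
    out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'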

TestMountStart/serial/StartWithMountFirst (8.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-494188 --memory=3072 --mount-string /tmp/TestMountStartserial1863453101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-494188 --memory=3072 --mount-string /tmp/TestMountStartserial1863453101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.690920557s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.69s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-494188 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
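
The mount-start flow reduces to two commands, both visible above: start a Kubernetes-less node with a host directory attached via `--mount-string` (a 9p mount; `--mount-msize` and `--mount-port` are 9p transport options), then list the mount point over SSH. Condensed replay (the host path is this run's temp directory; substitute your own):

    out/minikube-linux-arm64 start -p mount-start-1-494188 --memory=3072 \
        --mount-string /tmp/TestMountStartserial1863453101/001:/minikube-host \
        --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
        --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-494188 ssh -- ls /minikube-host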

TestMountStart/serial/StartWithMountSecond (9.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-495927 --memory=3072 --mount-string /tmp/TestMountStartserial1863453101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-495927 --memory=3072 --mount-string /tmp/TestMountStartserial1863453101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.052393265s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.05s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-495927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-494188 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-494188 --alsologtostderr -v=5: (1.701529527s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-495927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-495927
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-495927: (1.295813309s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-495927
E1205 07:20:39.248030  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-495927: (7.309258419s)
--- PASS: TestMountStart/serial/RestartStopped (8.31s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-495927 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (137.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793941 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793941 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.828720088s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.38s)

TestMultiNode/serial/DeployApp2Nodes (6.89s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-793941 -- rollout status deployment/busybox: (4.957387624s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-lgslv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-q45qh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-lgslv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-q45qh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-lgslv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-q45qh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.89s)
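
The DNS checks above walk each pod through the same three lookups, from bare service name to full FQDN, which separates search-path problems from cluster-DNS failures. The same matrix written as a loop (pod names come from the rollout above and change per run):

    for pod in busybox-7b57f96db7-lgslv busybox-7b57f96db7-q45qh; do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec "$pod" -- nslookup "$name"
      done
    done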

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-lgslv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-lgslv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-q45qh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec busybox-7b57f96db7-q45qh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
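
The host-connectivity probe resolves `host.minikube.internal` inside each pod and pings whatever address comes back (192.168.67.1, the docker network gateway, in this run). The `awk 'NR==5' | cut -d' ' -f3` pipeline just scrapes the address out of line 5 of busybox's nslookup output. As one sequence:

    POD=busybox-7b57f96db7-lgslv
    HOST_IP=$(out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec "$POD" -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 kubectl -p multinode-793941 -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"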

TestMultiNode/serial/AddNode (56.2s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-793941 -v=5 --alsologtostderr
E1205 07:23:44.276455  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:23:45.392302  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-793941 -v=5 --alsologtostderr: (55.496148598s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.20s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-793941 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.76s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

TestMultiNode/serial/CopyFile (10.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp testdata/cp-test.txt multinode-793941:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile712998612/001/cp-test_multinode-793941.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941:/home/docker/cp-test.txt multinode-793941-m02:/home/docker/cp-test_multinode-793941_multinode-793941-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 "sudo cat /home/docker/cp-test_multinode-793941_multinode-793941-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941:/home/docker/cp-test.txt multinode-793941-m03:/home/docker/cp-test_multinode-793941_multinode-793941-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m03 "sudo cat /home/docker/cp-test_multinode-793941_multinode-793941-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp testdata/cp-test.txt multinode-793941-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile712998612/001/cp-test_multinode-793941-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941-m02:/home/docker/cp-test.txt multinode-793941:/home/docker/cp-test_multinode-793941-m02_multinode-793941.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941 "sudo cat /home/docker/cp-test_multinode-793941-m02_multinode-793941.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941-m02:/home/docker/cp-test.txt multinode-793941-m03:/home/docker/cp-test_multinode-793941-m02_multinode-793941-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m03 "sudo cat /home/docker/cp-test_multinode-793941-m02_multinode-793941-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp testdata/cp-test.txt multinode-793941-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile712998612/001/cp-test_multinode-793941-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941-m03:/home/docker/cp-test.txt multinode-793941:/home/docker/cp-test_multinode-793941-m03_multinode-793941.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941 "sudo cat /home/docker/cp-test_multinode-793941-m03_multinode-793941.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941-m03:/home/docker/cp-test.txt multinode-793941-m02:/home/docker/cp-test_multinode-793941-m03_multinode-793941-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 "sudo cat /home/docker/cp-test_multinode-793941-m03_multinode-793941-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.49s)
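
Every hop in the copy matrix above is verified the same way: `cp` the file, then read it back with `ssh -n <node> "sudo cat ..."`. One representative hop (host to node, then node to node), lifted from the commands above:

    out/minikube-linux-arm64 -p multinode-793941 cp testdata/cp-test.txt \
        multinode-793941:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-793941 cp multinode-793941:/home/docker/cp-test.txt \
        multinode-793941-m02:/home/docker/cp-test_multinode-793941_multinode-793941-m02.txt
    out/minikube-linux-arm64 -p multinode-793941 ssh -n multinode-793941-m02 \
        "sudo cat /home/docker/cp-test_multinode-793941_multinode-793941-m02.txt"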

TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-793941 node stop m03: (1.31103616s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793941 status: exit status 7 (539.210817ms)

-- stdout --
	multinode-793941
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-793941-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-793941-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr: exit status 7 (533.357283ms)

-- stdout --
	multinode-793941
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-793941-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-793941-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 07:24:17.111586  584046 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:24:17.111705  584046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:24:17.111715  584046 out.go:374] Setting ErrFile to fd 2...
	I1205 07:24:17.111721  584046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:24:17.111969  584046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:24:17.112154  584046 out.go:368] Setting JSON to false
	I1205 07:24:17.112188  584046 mustload.go:66] Loading cluster: multinode-793941
	I1205 07:24:17.112584  584046 config.go:182] Loaded profile config "multinode-793941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:24:17.112602  584046 status.go:174] checking status of multinode-793941 ...
	I1205 07:24:17.113123  584046 cli_runner.go:164] Run: docker container inspect multinode-793941 --format={{.State.Status}}
	I1205 07:24:17.113374  584046 notify.go:221] Checking for updates...
	I1205 07:24:17.132822  584046 status.go:371] multinode-793941 host status = "Running" (err=<nil>)
	I1205 07:24:17.132846  584046 host.go:66] Checking if "multinode-793941" exists ...
	I1205 07:24:17.133161  584046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-793941
	I1205 07:24:17.152515  584046 host.go:66] Checking if "multinode-793941" exists ...
	I1205 07:24:17.152822  584046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:24:17.152881  584046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-793941
	I1205 07:24:17.174611  584046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/multinode-793941/id_rsa Username:docker}
	I1205 07:24:17.278173  584046 ssh_runner.go:195] Run: systemctl --version
	I1205 07:24:17.284674  584046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:24:17.297558  584046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 07:24:17.362003  584046 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-05 07:24:17.352538307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1205 07:24:17.362595  584046 kubeconfig.go:125] found "multinode-793941" server: "https://192.168.67.2:8443"
	I1205 07:24:17.362638  584046 api_server.go:166] Checking apiserver status ...
	I1205 07:24:17.362693  584046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 07:24:17.374257  584046 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	I1205 07:24:17.382593  584046 api_server.go:182] apiserver freezer: "8:freezer:/docker/5d9f8fea0688f0dd97196a2a362d27b91506936adebefbe9142f946e2079b602/crio/crio-a6bda27b0b38204a6e614512382452f0bae0e2c1527ce9b5fc85d71bd82b8f12"
	I1205 07:24:17.382663  584046 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5d9f8fea0688f0dd97196a2a362d27b91506936adebefbe9142f946e2079b602/crio/crio-a6bda27b0b38204a6e614512382452f0bae0e2c1527ce9b5fc85d71bd82b8f12/freezer.state
	I1205 07:24:17.390104  584046 api_server.go:204] freezer state: "THAWED"
	I1205 07:24:17.390132  584046 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1205 07:24:17.398446  584046 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1205 07:24:17.398475  584046 status.go:463] multinode-793941 apiserver status = Running (err=<nil>)
	I1205 07:24:17.398496  584046 status.go:176] multinode-793941 status: &{Name:multinode-793941 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:24:17.398513  584046 status.go:174] checking status of multinode-793941-m02 ...
	I1205 07:24:17.398839  584046 cli_runner.go:164] Run: docker container inspect multinode-793941-m02 --format={{.State.Status}}
	I1205 07:24:17.415750  584046 status.go:371] multinode-793941-m02 host status = "Running" (err=<nil>)
	I1205 07:24:17.415774  584046 host.go:66] Checking if "multinode-793941-m02" exists ...
	I1205 07:24:17.416078  584046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-793941-m02
	I1205 07:24:17.433235  584046 host.go:66] Checking if "multinode-793941-m02" exists ...
	I1205 07:24:17.433564  584046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 07:24:17.433612  584046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-793941-m02
	I1205 07:24:17.451761  584046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21997-441321/.minikube/machines/multinode-793941-m02/id_rsa Username:docker}
	I1205 07:24:17.559366  584046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 07:24:17.571841  584046 status.go:176] multinode-793941-m02 status: &{Name:multinode-793941-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:24:17.571876  584046 status.go:174] checking status of multinode-793941-m03 ...
	I1205 07:24:17.572171  584046 cli_runner.go:164] Run: docker container inspect multinode-793941-m03 --format={{.State.Status}}
	I1205 07:24:17.588890  584046 status.go:371] multinode-793941-m03 host status = "Stopped" (err=<nil>)
	I1205 07:24:17.588914  584046 status.go:384] host is not running, skipping remaining checks
	I1205 07:24:17.588926  584046 status.go:176] multinode-793941-m03 status: &{Name:multinode-793941-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
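
The stderr trace above shows how `status` decides "apiserver: Running": find the kube-apiserver PID, check that its freezer cgroup is THAWED (i.e. not paused), then hit /healthz. Roughly, run inside the node; the `curl -k` stand-in is an assumption, since minikube performs the HTTPS check in-process with the cluster's client certs, and the paths assume cgroup v1 as on this host:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    CG=$(sudo egrep '^[0-9]+:freezer:' "/proc/$PID/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # expect THAWED
    curl -k https://192.168.67.2:8443/healthz              # expect: ok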

TestMultiNode/serial/StartAfterStop (8.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-793941 node start m03 -v=5 --alsologtostderr: (7.869744027s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.70s)

TestMultiNode/serial/RestartKeepsNodes (73.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-793941
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-793941
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-793941: (25.056076141s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793941 --wait=true -v=5 --alsologtostderr
E1205 07:25:08.470547  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:25:22.320430  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:25:39.247925  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793941 --wait=true -v=5 --alsologtostderr: (48.216420851s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-793941
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.40s)

TestMultiNode/serial/DeleteNode (5.88s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-793941 node delete m03: (5.01554445s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.88s)

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-793941 stop: (23.870610354s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793941 status: exit status 7 (92.143122ms)

-- stdout --
	multinode-793941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-793941-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr: exit status 7 (102.537298ms)

-- stdout --
	multinode-793941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-793941-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1205 07:26:09.595292  591863 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:26:09.595428  591863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:26:09.595439  591863 out.go:374] Setting ErrFile to fd 2...
	I1205 07:26:09.595444  591863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:26:09.595695  591863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:26:09.595874  591863 out.go:368] Setting JSON to false
	I1205 07:26:09.595918  591863 mustload.go:66] Loading cluster: multinode-793941
	I1205 07:26:09.595990  591863 notify.go:221] Checking for updates...
	I1205 07:26:09.596927  591863 config.go:182] Loaded profile config "multinode-793941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:26:09.596948  591863 status.go:174] checking status of multinode-793941 ...
	I1205 07:26:09.597517  591863 cli_runner.go:164] Run: docker container inspect multinode-793941 --format={{.State.Status}}
	I1205 07:26:09.615921  591863 status.go:371] multinode-793941 host status = "Stopped" (err=<nil>)
	I1205 07:26:09.615946  591863 status.go:384] host is not running, skipping remaining checks
	I1205 07:26:09.616001  591863 status.go:176] multinode-793941 status: &{Name:multinode-793941 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 07:26:09.616040  591863 status.go:174] checking status of multinode-793941-m02 ...
	I1205 07:26:09.616347  591863 cli_runner.go:164] Run: docker container inspect multinode-793941-m02 --format={{.State.Status}}
	I1205 07:26:09.646332  591863 status.go:371] multinode-793941-m02 host status = "Stopped" (err=<nil>)
	I1205 07:26:09.646357  591863 status.go:384] host is not running, skipping remaining checks
	I1205 07:26:09.646364  591863 status.go:176] multinode-793941-m02 status: &{Name:multinode-793941-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

TestMultiNode/serial/RestartMultiNode (48.46s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793941 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793941 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.762164517s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793941 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.46s)
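
The final `kubectl get nodes -o go-template=...` check renders only each node's Ready condition, so a healthy restart prints one "True" per node and nothing else. The same template in directly runnable form:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'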

TestMultiNode/serial/ValidateNameConflict (37.48s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-793941
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793941-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-793941-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.287611ms)

                                                
                                                
-- stdout --
	* [multinode-793941-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-793941-m02' is duplicated with machine name 'multinode-793941-m02' in profile 'multinode-793941'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793941-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793941-m03 --driver=docker  --container-runtime=crio: (34.910118064s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-793941
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-793941: exit status 80 (348.642837ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-793941 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-793941-m03 already exists in multinode-793941-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-793941-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-793941-m03: (2.073413347s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.48s)
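The conflict check exercised here rejects a new profile whose name collides with a machine name owned by an existing multi-node profile (multinode-793941-m02 belongs to multinode-793941), while -m03 is still free to use as a standalone profile. A hypothetical sketch of that validation; machineNames and its -mNN naming scheme are assumptions inferred from the log, not minikube's actual code:

package main

import (
	"fmt"
	"strings"
)

// machineNames is a hypothetical helper: it lists the machine names a
// multi-node profile owns, using the -mNN suffix scheme visible in the log.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func main() {
	// "multinode-793941" is a two-node profile here, so it owns
	// "multinode-793941" and "multinode-793941-m02".
	requested := "multinode-793941-m02"
	for _, name := range machineNames("multinode-793941", 2) {
		if strings.EqualFold(requested, name) {
			fmt.Printf("profile name %q is duplicated with machine name %q\n",
				requested, name)
			return // the real CLI exits with status 14 (MK_USAGE) here
		}
	}
	fmt.Println("profile name is unique")
}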

                                                
                                    
TestPreload (122.78s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-428837 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-428837 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m0.911060513s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-428837 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-428837 image pull gcr.io/k8s-minikube/busybox: (2.1226067s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-428837
E1205 07:28:44.277153  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:28:45.391577  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-428837: (5.955572727s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-428837 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-428837 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.077274375s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-428837 image list
helpers_test.go:175: Cleaning up "test-preload-428837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-428837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-428837: (2.459884022s)
--- PASS: TestPreload (122.78s)

                                                
                                    
TestScheduledStopUnix (110.09s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-579258 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-579258 --memory=3072 --driver=docker  --container-runtime=crio: (33.653098487s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-579258 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:30:16.344149  605999 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:30:16.344323  605999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:30:16.344375  605999 out.go:374] Setting ErrFile to fd 2...
	I1205 07:30:16.344395  605999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:30:16.344688  605999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:30:16.344983  605999 out.go:368] Setting JSON to false
	I1205 07:30:16.345133  605999 mustload.go:66] Loading cluster: scheduled-stop-579258
	I1205 07:30:16.345563  605999 config.go:182] Loaded profile config "scheduled-stop-579258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:30:16.345704  605999 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/config.json ...
	I1205 07:30:16.345937  605999 mustload.go:66] Loading cluster: scheduled-stop-579258
	I1205 07:30:16.346097  605999 config.go:182] Loaded profile config "scheduled-stop-579258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-579258 -n scheduled-stop-579258
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-579258 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:30:16.802005  606087 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:30:16.802230  606087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:30:16.802257  606087 out.go:374] Setting ErrFile to fd 2...
	I1205 07:30:16.802278  606087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:30:16.802728  606087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:30:16.803086  606087 out.go:368] Setting JSON to false
	I1205 07:30:16.805397  606087 daemonize_unix.go:73] killing process 606022 as it is an old scheduled stop
	I1205 07:30:16.805546  606087 mustload.go:66] Loading cluster: scheduled-stop-579258
	I1205 07:30:16.805997  606087 config.go:182] Loaded profile config "scheduled-stop-579258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:30:16.806126  606087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/config.json ...
	I1205 07:30:16.806353  606087 mustload.go:66] Loading cluster: scheduled-stop-579258
	I1205 07:30:16.806566  606087 config.go:182] Loaded profile config "scheduled-stop-579258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1205 07:30:16.814245  444147 retry.go:31] will retry after 61.105µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.814455  444147 retry.go:31] will retry after 201.142µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.815313  444147 retry.go:31] will retry after 274.094µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.816610  444147 retry.go:31] will retry after 225.273µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.817762  444147 retry.go:31] will retry after 734.292µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.818894  444147 retry.go:31] will retry after 543.059µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.820002  444147 retry.go:31] will retry after 868.996µs: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.821145  444147 retry.go:31] will retry after 1.961177ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.823310  444147 retry.go:31] will retry after 1.629461ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.826152  444147 retry.go:31] will retry after 3.715364ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.830347  444147 retry.go:31] will retry after 5.508124ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.836582  444147 retry.go:31] will retry after 11.464336ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.848760  444147 retry.go:31] will retry after 7.301115ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.856984  444147 retry.go:31] will retry after 21.216541ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
I1205 07:30:16.879250  444147 retry.go:31] will retry after 38.441399ms: open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-579258 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1205 07:30:39.249280  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-579258 -n scheduled-stop-579258
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-579258
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-579258 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1205 07:30:42.735000  606453 out.go:360] Setting OutFile to fd 1 ...
	I1205 07:30:42.735269  606453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:30:42.735282  606453 out.go:374] Setting ErrFile to fd 2...
	I1205 07:30:42.735288  606453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1205 07:30:42.735845  606453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-441321/.minikube/bin
	I1205 07:30:42.736144  606453 out.go:368] Setting JSON to false
	I1205 07:30:42.736277  606453 mustload.go:66] Loading cluster: scheduled-stop-579258
	I1205 07:30:42.736661  606453 config.go:182] Loaded profile config "scheduled-stop-579258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1205 07:30:42.736752  606453 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/scheduled-stop-579258/config.json ...
	I1205 07:30:42.736990  606453 mustload.go:66] Loading cluster: scheduled-stop-579258
	I1205 07:30:42.737146  606453 config.go:182] Loaded profile config "scheduled-stop-579258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-579258
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-579258: exit status 7 (66.983876ms)

                                                
                                                
-- stdout --
	scheduled-stop-579258
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-579258 -n scheduled-stop-579258
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-579258 -n scheduled-stop-579258: exit status 7 (70.188986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-579258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-579258
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-579258: (4.83248104s)
--- PASS: TestScheduledStopUnix (110.09s)
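The block of retry.go:31 lines above shows the test polling for the scheduled-stop pid file with short, roughly doubling waits. A sketch of that retry pattern under the same assumptions; the path and deadline are illustrative and this is not minikube's retry package:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryOpen polls for a file with short, roughly doubling waits plus jitter,
// the pattern the retry.go lines above show while waiting for the
// scheduled-stop pid file.
func retryOpen(path string, deadline time.Duration) (*os.File, error) {
	wait := 50 * time.Microsecond
	start := time.Now()
	for {
		f, err := os.Open(path)
		if err == nil {
			return f, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("giving up on %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		// Double the wait and add jitter, matching the growing intervals
		// in the log (61µs, 201µs, ... 38ms).
		wait = wait*2 + time.Duration(rand.Int63n(int64(wait)))
	}
}

func main() {
	if f, err := retryOpen("/tmp/scheduled-stop-pid", 5*time.Millisecond); err != nil {
		fmt.Println(err)
	} else {
		f.Close()
	}
}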

                                                
                                    
TestInsufficientStorage (13.28s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-743161 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-743161 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.706501185s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b9b528e7-fafc-4b43-a7d8-8d47592ffe59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-743161] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"930252ad-7d54-4f4d-bb4a-2eaac3d96750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"c46956f6-7d3c-476a-aca9-28c18aa2ff60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66992274-2d30-4621-a379-81c91c63e94e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig"}}
	{"specversion":"1.0","id":"c2fac41f-c33f-4a9b-9611-ba2f5031cf14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube"}}
	{"specversion":"1.0","id":"db0cb7bc-3a47-4b65-83cc-f142d805c04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5bbdfc56-120d-4f90-ac95-9fa8f42ed398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a03df3a8-d6f6-47ce-be37-c1b7af2dc469","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8b3221c0-44e7-42a2-9a99-9fb6e2338ad0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6041f908-c552-481c-ab27-f59a60335df5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a531949e-7884-40fa-9f5c-cf6454485f20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cb0ab5c5-c7d0-47a0-8f16-49f06831384f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-743161\" primary control-plane node in \"insufficient-storage-743161\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e7d638f-5de3-45da-b4a2-8fb64c950a41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"277f1085-a4e8-4a9c-92cd-7c7b49101a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"68c36b07-7f0c-4aac-8fd3-d2541a84773a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-743161 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-743161 --output=json --layout=cluster: exit status 7 (297.626963ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-743161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-743161","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:31:43.705003  608159 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-743161" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-743161 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-743161 --output=json --layout=cluster: exit status 7 (313.222657ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-743161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-743161","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 07:31:44.019837  608227 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-743161" does not appear in /home/jenkins/minikube-integration/21997-441321/kubeconfig
	E1205 07:31:44.030453  608227 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/insufficient-storage-743161/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-743161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-743161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-743161: (1.964094377s)
--- PASS: TestInsufficientStorage (13.28s)
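Both status calls return the --layout=cluster JSON shown above, with StatusCode 507 mirroring HTTP 507 Insufficient Storage. A small decoder for just the fields this test inspects; the struct is derived from the JSON in the log rather than from minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus models only the fields this test inspects; the names come
// from the JSON in the log above, not from minikube's source.
type ClusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
	Nodes        []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"insufficient-storage-743161","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-743161","StatusCode":507,"StatusName":"InsufficientStorage"}]}`

	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Prints: InsufficientStorage 507 /var is almost out of disk space
	fmt.Println(st.StatusName, st.StatusCode, st.StatusDetail)
}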

                                                
                                    
TestRunningBinaryUpgrade (299.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3062307140 start -p running-upgrade-685187 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3062307140 start -p running-upgrade-685187 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.396298746s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-685187 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 07:40:39.247297  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:41:48.472921  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:42:02.322070  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:43:44.276280  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:43:45.391480  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-685187 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.285848348s)
helpers_test.go:175: Cleaning up "running-upgrade-685187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-685187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-685187: (1.976124969s)
--- PASS: TestRunningBinaryUpgrade (299.04s)

                                                
                                    
TestMissingContainerUpgrade (123.44s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.900349426 start -p missing-upgrade-168812 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.900349426 start -p missing-upgrade-168812 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.913728579s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-168812
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-168812
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-168812 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-168812 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.450430127s)
helpers_test.go:175: Cleaning up "missing-upgrade-168812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-168812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-168812: (2.910119691s)
--- PASS: TestMissingContainerUpgrade (123.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-587853 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-587853 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.473678ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-587853] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-441321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-441321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (47.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-587853 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-587853 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (46.874809289s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-587853 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.924900108s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-587853 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-587853 status -o json: exit status 2 (366.814674ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-587853","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-587853
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-587853: (2.259255634s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.55s)

                                                
                                    
TestNoKubernetes/serial/Start (9.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-587853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.633507495s)
--- PASS: TestNoKubernetes/serial/Start (9.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-441321/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-587853 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-587853 "sudo systemctl is-active --quiet service kubelet": exit status 1 (443.169731ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
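systemctl is-active --quiet prints nothing and signals unit state via its exit code: 0 for active, non-zero otherwise (3 here, systemd's code for an inactive unit), which is all the test needs to confirm the kubelet is not running. A sketch of the same check done locally with os/exec instead of over minikube ssh:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same check the test runs over `minikube ssh`, done locally.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 3 is systemd's exit code for an inactive unit, matching
		// "Process exited with status 3" in the log above.
		fmt.Println("kubelet not active, exit status:", ee.ExitCode())
		return
	}
	fmt.Println("could not run systemctl:", err)
}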

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.239124569s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.97s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-587853
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-587853: (1.297113762s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-587853 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-587853 --driver=docker  --container-runtime=crio: (7.067756504s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-587853 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-587853 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.984164ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (303.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.393215135 start -p stopped-upgrade-837565 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.393215135 start -p stopped-upgrade-837565 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.499327264s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.393215135 -p stopped-upgrade-837565 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.393215135 -p stopped-upgrade-837565 stop: (1.250792942s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-837565 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 07:35:39.247403  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:36:47.348219  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:44.277219  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-787602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 07:38:45.392366  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/addons-640282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-837565 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.750354127s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.51s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-837565
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-837565: (1.813051572s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.81s)

                                                
                                    
TestPause/serial/Start (78.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-908773 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-908773 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.682700847s)
--- PASS: TestPause/serial/Start (78.68s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.55s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-908773 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 07:45:39.247616  444147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-441321/.minikube/profiles/functional-252233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-908773 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.52332381s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.55s)

                                                
                                    

Test skip (35/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.14
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1205 06:11:14.142785  444147 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1205 06:11:14.237245  444147 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
W1205 06:11:14.283734  444147 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-885973 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-885973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-885973
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
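This and the remaining DNS-forwarding subtests all hit the same guard at functional_test_tunnel_test.go:99. A sketch of that platform check, assuming it combines runtime.GOOS with a hypothetical driverName accessor for the --driver flag:

package test

import (
	"runtime"
	"testing"
)

func driverName() string { return "docker" } // hypothetical stand-in for --driver

func skipIfDNSForwardingUnsupported(t *testing.T) {
	// minikube tunnel's DNS forwarding is only wired up for Hyperkit on macOS.
	if runtime.GOOS != "darwin" || driverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}

func TestDNSResolutionByDig(t *testing.T) {
	skipIfDNSForwardingUnsupported(t)
	// ... resolve the tunneled service with dig here ...
}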

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
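TestGvisorAddon is gated behind an opt-in flag rather than a platform check. A sketch of that gate, assuming a plain flag.Bool registered by the test binary (the flag name is taken from the log line above):

package test

import (
	"flag"
	"testing"
)

var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddon(t *testing.T) {
	// Only runs when the suite is invoked with --gvisor=true.
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... enable the gvisor addon and run a workload under runsc ...
}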

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)